00:00:00.000 Started by upstream project "autotest-per-patch" build number 126199 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "jbp-per-patch" build number 23960 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.029 The recommended git tool is: git 00:00:00.029 using credential 00000000-0000-0000-0000-000000000002 00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.057 Fetching changes from the remote Git repository 00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.079 Using shallow fetch with depth 1 00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.079 > git --version # timeout=10 00:00:00.091 > git --version # 'git version 2.39.2' 00:00:00.091 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.102 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.102 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5 00:00:04.937 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.947 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.962 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD) 00:00:04.962 > git config core.sparsecheckout # timeout=10 00:00:04.972 > git read-tree -mu HEAD # timeout=10 00:00:04.991 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5 00:00:05.026 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:05.026 > git rev-list --no-walk 5fe533b64b2bcae2206a8f61fddcc62257280cde # timeout=10 00:00:05.143 [Pipeline] Start of Pipeline 00:00:05.156 [Pipeline] library 00:00:05.157 Loading library shm_lib@master 00:00:05.157 Library shm_lib@master is cached. Copying from home. 00:00:05.172 [Pipeline] node 00:00:05.182 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.184 [Pipeline] { 00:00:05.193 [Pipeline] catchError 00:00:05.194 [Pipeline] { 00:00:05.206 [Pipeline] wrap 00:00:05.214 [Pipeline] { 00:00:05.221 [Pipeline] stage 00:00:05.223 [Pipeline] { (Prologue) 00:00:05.398 [Pipeline] sh 00:00:05.678 + logger -p user.info -t JENKINS-CI 00:00:05.697 [Pipeline] echo 00:00:05.699 Node: GP11 00:00:05.706 [Pipeline] sh 00:00:05.997 [Pipeline] setCustomBuildProperty 00:00:06.009 [Pipeline] echo 00:00:06.010 Cleanup processes 00:00:06.017 [Pipeline] sh 00:00:06.300 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.301 943447 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.313 [Pipeline] sh 00:00:06.592 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.592 ++ awk '{print $1}' 00:00:06.592 ++ grep -v 'sudo pgrep' 00:00:06.592 + sudo kill -9 00:00:06.592 + true 00:00:06.603 [Pipeline] cleanWs 00:00:06.610 [WS-CLEANUP] Deleting project workspace... 00:00:06.610 [WS-CLEANUP] Deferred wipeout is used... 
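The pipeline's "Cleanup processes" step above guards against leftovers from a previous run on the same node: list every process whose command line mentions the workspace, drop the pgrep invocation itself, and kill the rest. A minimal sketch of the same idiom in plain bash (the workspace path is taken from the log; everything else is illustrative):

    #!/usr/bin/env bash
    # Find stale processes from an earlier run of this job on this node.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af: match against the full command line, print "PID CMDLINE".
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    # $pids is intentionally unquoted so multiple PIDs split into arguments;
    # 'kill -9' with no PIDs fails, and '|| true' keeps the cleanup step green,
    # which is exactly the '+ sudo kill -9' / '+ true' pair visible above.
    sudo kill -9 $pids || true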
00:00:06.616 [WS-CLEANUP] done 00:00:06.619 [Pipeline] setCustomBuildProperty 00:00:06.629 [Pipeline] sh 00:00:06.905 + sudo git config --global --replace-all safe.directory '*' 00:00:06.980 [Pipeline] httpRequest 00:00:07.012 [Pipeline] echo 00:00:07.014 Sorcerer 10.211.164.101 is alive 00:00:07.022 [Pipeline] httpRequest 00:00:07.026 HttpMethod: GET 00:00:07.027 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:07.027 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:07.046 Response Code: HTTP/1.1 200 OK 00:00:07.047 Success: Status code 200 is in the accepted range: 200,404 00:00:07.048 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:11.339 [Pipeline] sh 00:00:11.619 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:11.633 [Pipeline] httpRequest 00:00:11.654 [Pipeline] echo 00:00:11.656 Sorcerer 10.211.164.101 is alive 00:00:11.663 [Pipeline] httpRequest 00:00:11.667 HttpMethod: GET 00:00:11.668 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:11.668 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:11.693 Response Code: HTTP/1.1 200 OK 00:00:11.694 Success: Status code 200 is in the accepted range: 200,404 00:00:11.694 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:02:37.836 [Pipeline] sh 00:02:38.116 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:02:40.662 [Pipeline] sh 00:02:40.943 + git -C spdk log --oneline -n5 00:02:40.943 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
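The httpRequest/tar pairs above implement a simple commit-addressed source cache: after probing that the cache host ("Sorcerer") is alive, the job requests a tarball named <repo>_<sha>.tar.gz, treats 404 as an acceptable response (a cache miss), and unpacks hits with --no-same-owner so files belong to the build user rather than the uid that created the archive. A rough shell equivalent, with curl standing in for the Jenkins httpRequest step (the curl usage is illustrative, not from the log):

    # Fetch a commit-addressed tarball from the package cache, tolerating a miss.
    SORCERER=http://10.211.164.101/packages
    SHA=d49304e16352441ae7eebb2419125dd094201f3e
    if curl -fsSO "$SORCERER/jbp_${SHA}.tar.gz"; then
        tar --no-same-owner -xf "jbp_${SHA}.tar.gz"
    else
        echo "cache miss for jbp_${SHA}"   # the pipeline accepts 404 as well as 200
    fi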
00:02:40.943 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:02:40.943 2d30d9f83 accel: introduce tasks in sequence limit 00:02:40.943 2728651ee accel: adjust task per ch define name 00:02:40.943 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:02:40.956 [Pipeline] } 00:02:40.974 [Pipeline] // stage 00:02:40.984 [Pipeline] stage 00:02:40.986 [Pipeline] { (Prepare) 00:02:41.004 [Pipeline] writeFile 00:02:41.020 [Pipeline] sh 00:02:41.344 + logger -p user.info -t JENKINS-CI 00:02:41.357 [Pipeline] sh 00:02:41.633 + logger -p user.info -t JENKINS-CI 00:02:41.645 [Pipeline] sh 00:02:41.927 + cat autorun-spdk.conf 00:02:41.927 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:41.927 SPDK_TEST_NVMF=1 00:02:41.927 SPDK_TEST_NVME_CLI=1 00:02:41.927 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:41.927 SPDK_TEST_NVMF_NICS=e810 00:02:41.927 SPDK_TEST_VFIOUSER=1 00:02:41.927 SPDK_RUN_UBSAN=1 00:02:41.927 NET_TYPE=phy 00:02:41.933 RUN_NIGHTLY=0 00:02:41.936 [Pipeline] readFile 00:02:41.958 [Pipeline] withEnv 00:02:41.960 [Pipeline] { 00:02:41.973 [Pipeline] sh 00:02:42.251 + set -ex 00:02:42.251 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:42.251 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:42.251 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.251 ++ SPDK_TEST_NVMF=1 00:02:42.251 ++ SPDK_TEST_NVME_CLI=1 00:02:42.251 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.251 ++ SPDK_TEST_NVMF_NICS=e810 00:02:42.251 ++ SPDK_TEST_VFIOUSER=1 00:02:42.251 ++ SPDK_RUN_UBSAN=1 00:02:42.251 ++ NET_TYPE=phy 00:02:42.251 ++ RUN_NIGHTLY=0 00:02:42.251 + case $SPDK_TEST_NVMF_NICS in 00:02:42.251 + DRIVERS=ice 00:02:42.251 + [[ tcp == \r\d\m\a ]] 00:02:42.251 + [[ -n ice ]] 00:02:42.251 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:42.251 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:42.251 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:42.251 rmmod: ERROR: Module irdma is not currently loaded 00:02:42.251 rmmod: ERROR: Module i40iw is not currently loaded 00:02:42.251 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:42.251 + true 00:02:42.251 + for D in $DRIVERS 00:02:42.251 + sudo modprobe ice 00:02:42.251 + exit 0 00:02:42.261 [Pipeline] } 00:02:42.279 [Pipeline] // withEnv 00:02:42.286 [Pipeline] } 00:02:42.304 [Pipeline] // stage 00:02:42.314 [Pipeline] catchError 00:02:42.316 [Pipeline] { 00:02:42.332 [Pipeline] timeout 00:02:42.333 Timeout set to expire in 50 min 00:02:42.335 [Pipeline] { 00:02:42.353 [Pipeline] stage 00:02:42.356 [Pipeline] { (Tests) 00:02:42.372 [Pipeline] sh 00:02:42.647 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:42.647 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:42.647 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:42.647 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:42.647 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.647 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:42.647 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:42.647 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:42.647 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:42.647 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:42.647 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:42.647 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:42.647 + source /etc/os-release 00:02:42.647 ++ NAME='Fedora Linux' 00:02:42.647 ++ VERSION='38 (Cloud Edition)' 00:02:42.647 ++ ID=fedora 00:02:42.647 ++ VERSION_ID=38 00:02:42.647 ++ VERSION_CODENAME= 00:02:42.647 ++ PLATFORM_ID=platform:f38 00:02:42.647 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:42.647 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:42.647 ++ LOGO=fedora-logo-icon 00:02:42.647 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:42.647 ++ HOME_URL=https://fedoraproject.org/ 00:02:42.647 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:42.647 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:42.647 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:42.647 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:42.647 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:42.647 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:42.647 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:42.647 ++ SUPPORT_END=2024-05-14 00:02:42.647 ++ VARIANT='Cloud Edition' 00:02:42.647 ++ VARIANT_ID=cloud 00:02:42.647 + uname -a 00:02:42.647 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:42.647 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:43.581 Hugepages 00:02:43.581 node hugesize free / total 00:02:43.581 node0 1048576kB 0 / 0 00:02:43.581 node0 2048kB 0 / 0 00:02:43.581 node1 1048576kB 0 / 0 00:02:43.581 node1 2048kB 0 / 0 00:02:43.581 00:02:43.581 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.581 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:43.581 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:43.581 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:43.581 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:43.582 + rm -f /tmp/spdk-ld-path 00:02:43.582 + source autorun-spdk.conf 00:02:43.582 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.582 ++ SPDK_TEST_NVMF=1 00:02:43.582 ++ SPDK_TEST_NVME_CLI=1 00:02:43.582 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:43.582 ++ SPDK_TEST_NVMF_NICS=e810 00:02:43.582 ++ SPDK_TEST_VFIOUSER=1 00:02:43.582 ++ SPDK_RUN_UBSAN=1 00:02:43.582 ++ NET_TYPE=phy 00:02:43.582 ++ RUN_NIGHTLY=0 00:02:43.582 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:43.582 + [[ -n '' ]] 00:02:43.582 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.840 + for M in /var/spdk/build-*-manifest.txt 00:02:43.840 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:02:43.840 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:43.840 + for M in /var/spdk/build-*-manifest.txt 00:02:43.840 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:43.840 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:43.840 ++ uname 00:02:43.840 + [[ Linux == \L\i\n\u\x ]] 00:02:43.840 + sudo dmesg -T 00:02:43.840 + sudo dmesg --clear 00:02:43.840 + dmesg_pid=944750 00:02:43.840 + [[ Fedora Linux == FreeBSD ]] 00:02:43.840 + sudo dmesg -Tw 00:02:43.840 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.840 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.840 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:43.840 + [[ -x /usr/src/fio-static/fio ]] 00:02:43.840 + export FIO_BIN=/usr/src/fio-static/fio 00:02:43.840 + FIO_BIN=/usr/src/fio-static/fio 00:02:43.840 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:43.840 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:43.840 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:43.840 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.840 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.840 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:43.840 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.840 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.840 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:43.840 Test configuration: 00:02:43.840 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.840 SPDK_TEST_NVMF=1 00:02:43.840 SPDK_TEST_NVME_CLI=1 00:02:43.840 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:43.840 SPDK_TEST_NVMF_NICS=e810 00:02:43.840 SPDK_TEST_VFIOUSER=1 00:02:43.840 SPDK_RUN_UBSAN=1 00:02:43.840 NET_TYPE=phy 00:02:43.840 RUN_NIGHTLY=0 15:44:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:43.840 15:44:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:43.840 15:44:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.840 15:44:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.840 15:44:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.840 15:44:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.841 15:44:10 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.841 15:44:10 -- paths/export.sh@5 -- $ export PATH 00:02:43.841 15:44:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.841 15:44:10 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:43.841 15:44:10 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:43.841 15:44:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721051050.XXXXXX 00:02:43.841 15:44:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051050.TYx9lp 00:02:43.841 15:44:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:43.841 15:44:10 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:43.841 15:44:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:43.841 15:44:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:43.841 15:44:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:43.841 15:44:10 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:43.841 15:44:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:43.841 15:44:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.841 15:44:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:43.841 15:44:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:43.841 15:44:10 -- pm/common@17 -- $ local monitor 00:02:43.841 15:44:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.841 15:44:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.841 15:44:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.841 15:44:10 -- pm/common@21 -- $ date +%s 00:02:43.841 15:44:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.841 15:44:10 -- pm/common@21 -- $ date +%s 00:02:43.841 15:44:10 -- pm/common@25 -- $ sleep 1 00:02:43.841 15:44:10 -- pm/common@21 -- $ date +%s 00:02:43.841 15:44:10 -- pm/common@21 -- $ date +%s 00:02:43.841 15:44:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051050 00:02:43.841 15:44:10 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051050 00:02:43.841 15:44:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051050 00:02:43.841 15:44:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051050 00:02:43.841 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051050_collect-vmstat.pm.log 00:02:43.841 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051050_collect-cpu-load.pm.log 00:02:43.841 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051050_collect-cpu-temp.pm.log 00:02:43.841 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051050_collect-bmc-pm.bmc.pm.log 00:02:44.771 15:44:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:44.771 15:44:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:44.771 15:44:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:44.771 15:44:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.771 15:44:11 -- spdk/autobuild.sh@16 -- $ date -u 00:02:44.771 Mon Jul 15 01:44:11 PM UTC 2024 00:02:44.771 15:44:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:44.771 v24.09-pre-209-ga95bbf233 00:02:44.771 15:44:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:44.771 15:44:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:44.771 15:44:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:44.771 15:44:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:44.771 15:44:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:44.771 15:44:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.771 ************************************ 00:02:44.771 START TEST ubsan 00:02:44.771 ************************************ 00:02:44.771 15:44:11 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:44.771 using ubsan 00:02:44.771 00:02:44.771 real 0m0.000s 00:02:44.771 user 0m0.000s 00:02:44.771 sys 0m0.000s 00:02:44.771 15:44:11 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:44.771 15:44:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:44.771 ************************************ 00:02:44.771 END TEST ubsan 00:02:44.771 ************************************ 00:02:45.028 15:44:11 -- common/autotest_common.sh@1142 -- $ return 0 00:02:45.028 15:44:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:45.028 15:44:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:45.028 15:44:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:45.028 15:44:11 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:45.028 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:45.028 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:45.287 Using 'verbs' RDMA provider 00:02:55.818 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:05.797 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:05.797 Creating mk/config.mk...done. 00:03:05.797 Creating mk/cc.flags.mk...done. 00:03:05.797 Type 'make' to build. 00:03:05.797 15:44:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:05.797 15:44:31 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:05.797 15:44:31 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:05.797 15:44:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.797 ************************************ 00:03:05.797 START TEST make 00:03:05.797 ************************************ 00:03:05.797 15:44:31 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:05.797 make[1]: Nothing to be done for 'all'. 00:03:06.744 The Meson build system 00:03:06.744 Version: 1.3.1 00:03:06.744 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:06.744 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:06.744 Build type: native build 00:03:06.744 Project name: libvfio-user 00:03:06.744 Project version: 0.0.1 00:03:06.744 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:06.744 C linker for the host machine: cc ld.bfd 2.39-16 00:03:06.744 Host machine cpu family: x86_64 00:03:06.744 Host machine cpu: x86_64 00:03:06.744 Run-time dependency threads found: YES 00:03:06.744 Library dl found: YES 00:03:06.744 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:06.744 Run-time dependency json-c found: YES 0.17 00:03:06.744 Run-time dependency cmocka found: YES 1.1.7 00:03:06.744 Program pytest-3 found: NO 00:03:06.744 Program flake8 found: NO 00:03:06.744 Program misspell-fixer found: NO 00:03:06.744 Program restructuredtext-lint found: NO 00:03:06.744 Program valgrind found: YES (/usr/bin/valgrind) 00:03:06.744 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.744 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.744 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.744 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:06.744 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:06.744 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:06.744 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
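The `run_test ubsan ...` and `run_test make make -j48` calls above both go through the same wrapper (the `common/autotest_common.sh@...` xtrace prefixes point into it): the test name is bracketed by banners and the command is timed, which is what makes individual tests greppable in a log this size. A minimal re-creation of the observable behavior, not the real helper:

    # Sketch of a run_test-style wrapper: banner, time the command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines seen in the log
        local rc=$?        # $? still holds the timed command's exit status here
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test ubsan echo 'using ubsan'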
00:03:06.744 Build targets in project: 8 00:03:06.744 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:06.744 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:06.744 00:03:06.744 libvfio-user 0.0.1 00:03:06.744 00:03:06.744 User defined options 00:03:06.744 buildtype : debug 00:03:06.744 default_library: shared 00:03:06.744 libdir : /usr/local/lib 00:03:06.744 00:03:06.744 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.696 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:07.696 [1/37] Compiling C object samples/null.p/null.c.o 00:03:07.958 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:07.958 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:07.958 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:07.958 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:07.958 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:07.958 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:07.958 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:07.958 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:07.958 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:07.958 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:07.958 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:07.958 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:07.958 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:07.958 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:07.958 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:07.958 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:07.958 [18/37] Compiling C object samples/client.p/client.c.o 00:03:07.958 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:07.958 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:07.958 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:07.958 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:07.958 [23/37] Compiling C object samples/server.p/server.c.o 00:03:07.958 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:07.958 [25/37] Linking target samples/client 00:03:07.958 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.222 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.222 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.222 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:08.222 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.222 [31/37] Linking target test/unit_tests 00:03:08.484 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:08.484 [33/37] Linking target samples/server 00:03:08.484 [34/37] Linking target samples/lspci 00:03:08.484 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:08.484 [36/37] Linking target samples/gpio-pci-idio-16 00:03:08.484 [37/37] Linking target samples/null 00:03:08.484 INFO: autodetecting backend as ninja 00:03:08.484 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
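The libvfio-user build above is a standard Meson out-of-tree flow: configure once into a dedicated build directory, compile there with ninja, then stage the install into the SPDK tree via DESTDIR instead of touching /usr/local. Reconstructed from the source/build directories and the "User defined options" block in the log (a sketch; SPDK's build scripts drive the real invocation):

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user

    # Options as shown under "User defined options" above.
    meson setup "$BUILD" "$SRC" --buildtype=debug --libdir=/usr/local/lib \
        -Ddefault_library=shared
    ninja -C "$BUILD"
    # DESTDIR reroutes the install under $STAGE, as in the log below:
    DESTDIR="$STAGE" meson install --quiet -C "$BUILD"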
00:03:08.484 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:09.427 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:09.427 ninja: no work to do. 00:03:14.705 The Meson build system 00:03:14.705 Version: 1.3.1 00:03:14.705 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:14.705 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:14.705 Build type: native build 00:03:14.705 Program cat found: YES (/usr/bin/cat) 00:03:14.705 Project name: DPDK 00:03:14.705 Project version: 24.03.0 00:03:14.705 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:14.705 C linker for the host machine: cc ld.bfd 2.39-16 00:03:14.705 Host machine cpu family: x86_64 00:03:14.705 Host machine cpu: x86_64 00:03:14.705 Message: ## Building in Developer Mode ## 00:03:14.705 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:14.705 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:14.705 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:14.705 Program python3 found: YES (/usr/bin/python3) 00:03:14.705 Program cat found: YES (/usr/bin/cat) 00:03:14.705 Compiler for C supports arguments -march=native: YES 00:03:14.705 Checking for size of "void *" : 8 00:03:14.705 Checking for size of "void *" : 8 (cached) 00:03:14.705 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:14.705 Library m found: YES 00:03:14.705 Library numa found: YES 00:03:14.705 Has header "numaif.h" : YES 00:03:14.705 Library fdt found: NO 00:03:14.705 Library execinfo found: NO 00:03:14.705 Has header "execinfo.h" : YES 00:03:14.705 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:14.705 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:14.705 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:14.705 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:14.705 Run-time dependency openssl found: YES 3.0.9 00:03:14.705 Run-time dependency libpcap found: YES 1.10.4 00:03:14.705 Has header "pcap.h" with dependency libpcap: YES 00:03:14.705 Compiler for C supports arguments -Wcast-qual: YES 00:03:14.706 Compiler for C supports arguments -Wdeprecated: YES 00:03:14.706 Compiler for C supports arguments -Wformat: YES 00:03:14.706 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:14.706 Compiler for C supports arguments -Wformat-security: NO 00:03:14.706 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:14.706 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:14.706 Compiler for C supports arguments -Wnested-externs: YES 00:03:14.706 Compiler for C supports arguments -Wold-style-definition: YES 00:03:14.706 Compiler for C supports arguments -Wpointer-arith: YES 00:03:14.706 Compiler for C supports arguments -Wsign-compare: YES 00:03:14.706 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:14.706 Compiler for C supports arguments -Wundef: YES 00:03:14.706 Compiler for C supports arguments -Wwrite-strings: YES 00:03:14.706 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:14.706 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:03:14.706 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:14.706 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:14.706 Program objdump found: YES (/usr/bin/objdump) 00:03:14.706 Compiler for C supports arguments -mavx512f: YES 00:03:14.706 Checking if "AVX512 checking" compiles: YES 00:03:14.706 Fetching value of define "__SSE4_2__" : 1 00:03:14.706 Fetching value of define "__AES__" : 1 00:03:14.706 Fetching value of define "__AVX__" : 1 00:03:14.706 Fetching value of define "__AVX2__" : (undefined) 00:03:14.706 Fetching value of define "__AVX512BW__" : (undefined) 00:03:14.706 Fetching value of define "__AVX512CD__" : (undefined) 00:03:14.706 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:14.706 Fetching value of define "__AVX512F__" : (undefined) 00:03:14.706 Fetching value of define "__AVX512VL__" : (undefined) 00:03:14.706 Fetching value of define "__PCLMUL__" : 1 00:03:14.706 Fetching value of define "__RDRND__" : 1 00:03:14.706 Fetching value of define "__RDSEED__" : (undefined) 00:03:14.706 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:14.706 Fetching value of define "__znver1__" : (undefined) 00:03:14.706 Fetching value of define "__znver2__" : (undefined) 00:03:14.706 Fetching value of define "__znver3__" : (undefined) 00:03:14.706 Fetching value of define "__znver4__" : (undefined) 00:03:14.706 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:14.706 Message: lib/log: Defining dependency "log" 00:03:14.706 Message: lib/kvargs: Defining dependency "kvargs" 00:03:14.706 Message: lib/telemetry: Defining dependency "telemetry" 00:03:14.706 Checking for function "getentropy" : NO 00:03:14.706 Message: lib/eal: Defining dependency "eal" 00:03:14.706 Message: lib/ring: Defining dependency "ring" 00:03:14.706 Message: lib/rcu: Defining dependency "rcu" 00:03:14.706 Message: lib/mempool: Defining dependency "mempool" 00:03:14.706 Message: lib/mbuf: Defining dependency "mbuf" 00:03:14.706 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:14.706 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:14.706 Compiler for C supports arguments -mpclmul: YES 00:03:14.706 Compiler for C supports arguments -maes: YES 00:03:14.706 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:14.706 Compiler for C supports arguments -mavx512bw: YES 00:03:14.706 Compiler for C supports arguments -mavx512dq: YES 00:03:14.706 Compiler for C supports arguments -mavx512vl: YES 00:03:14.706 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:14.706 Compiler for C supports arguments -mavx2: YES 00:03:14.706 Compiler for C supports arguments -mavx: YES 00:03:14.706 Message: lib/net: Defining dependency "net" 00:03:14.706 Message: lib/meter: Defining dependency "meter" 00:03:14.706 Message: lib/ethdev: Defining dependency "ethdev" 00:03:14.706 Message: lib/pci: Defining dependency "pci" 00:03:14.706 Message: lib/cmdline: Defining dependency "cmdline" 00:03:14.706 Message: lib/hash: Defining dependency "hash" 00:03:14.706 Message: lib/timer: Defining dependency "timer" 00:03:14.706 Message: lib/compressdev: Defining dependency "compressdev" 00:03:14.706 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:14.706 Message: lib/dmadev: Defining dependency "dmadev" 00:03:14.706 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:14.706 Message: lib/power: Defining dependency "power" 00:03:14.706 Message: lib/reorder: Defining dependency "reorder" 00:03:14.706 
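Every "Compiler for C supports arguments ..." and "Fetching value of define ..." line in this stretch is a small compile probe that Meson runs against the host toolchain. The same checks can be reproduced by hand (illustrative one-liners, not DPDK's actual buildtools):

    # Does the compiler accept a flag? Compile an empty program with it.
    echo 'int main(void){return 0;}' | cc -Werror -mavx512f -x c - -o /dev/null \
        && echo '-mavx512f: YES' || echo '-mavx512f: NO'

    # Is a macro predefined, and with what value? Dump the predefined macros.
    cc -dM -E - </dev/null | grep -w '__AVX2__' || echo '__AVX2__ : (undefined)'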
Message: lib/security: Defining dependency "security" 00:03:14.706 Has header "linux/userfaultfd.h" : YES 00:03:14.706 Has header "linux/vduse.h" : YES 00:03:14.706 Message: lib/vhost: Defining dependency "vhost" 00:03:14.706 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:14.706 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:14.706 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:14.706 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:14.706 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:14.706 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:14.706 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:14.706 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:14.706 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:14.706 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:14.706 Program doxygen found: YES (/usr/bin/doxygen) 00:03:14.706 Configuring doxy-api-html.conf using configuration 00:03:14.706 Configuring doxy-api-man.conf using configuration 00:03:14.706 Program mandb found: YES (/usr/bin/mandb) 00:03:14.706 Program sphinx-build found: NO 00:03:14.706 Configuring rte_build_config.h using configuration 00:03:14.706 Message: 00:03:14.706 ================= 00:03:14.706 Applications Enabled 00:03:14.706 ================= 00:03:14.706 00:03:14.706 apps: 00:03:14.706 00:03:14.706 00:03:14.706 Message: 00:03:14.706 ================= 00:03:14.706 Libraries Enabled 00:03:14.706 ================= 00:03:14.706 00:03:14.706 libs: 00:03:14.706 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:14.706 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:14.706 cryptodev, dmadev, power, reorder, security, vhost, 00:03:14.706 00:03:14.706 Message: 00:03:14.706 =============== 00:03:14.706 Drivers Enabled 00:03:14.706 =============== 00:03:14.706 00:03:14.706 common: 00:03:14.706 00:03:14.706 bus: 00:03:14.706 pci, vdev, 00:03:14.706 mempool: 00:03:14.706 ring, 00:03:14.706 dma: 00:03:14.706 00:03:14.706 net: 00:03:14.706 00:03:14.706 crypto: 00:03:14.706 00:03:14.706 compress: 00:03:14.706 00:03:14.706 vdpa: 00:03:14.706 00:03:14.706 00:03:14.706 Message: 00:03:14.706 ================= 00:03:14.706 Content Skipped 00:03:14.706 ================= 00:03:14.706 00:03:14.706 apps: 00:03:14.706 dumpcap: explicitly disabled via build config 00:03:14.706 graph: explicitly disabled via build config 00:03:14.706 pdump: explicitly disabled via build config 00:03:14.706 proc-info: explicitly disabled via build config 00:03:14.706 test-acl: explicitly disabled via build config 00:03:14.706 test-bbdev: explicitly disabled via build config 00:03:14.706 test-cmdline: explicitly disabled via build config 00:03:14.706 test-compress-perf: explicitly disabled via build config 00:03:14.706 test-crypto-perf: explicitly disabled via build config 00:03:14.706 test-dma-perf: explicitly disabled via build config 00:03:14.706 test-eventdev: explicitly disabled via build config 00:03:14.707 test-fib: explicitly disabled via build config 00:03:14.707 test-flow-perf: explicitly disabled via build config 00:03:14.707 test-gpudev: explicitly disabled via build config 00:03:14.707 test-mldev: explicitly disabled via build config 00:03:14.707 test-pipeline: explicitly disabled via build config 00:03:14.707 test-pmd: explicitly disabled via build config 
00:03:14.707 test-regex: explicitly disabled via build config 00:03:14.707 test-sad: explicitly disabled via build config 00:03:14.707 test-security-perf: explicitly disabled via build config 00:03:14.707 00:03:14.707 libs: 00:03:14.707 argparse: explicitly disabled via build config 00:03:14.707 metrics: explicitly disabled via build config 00:03:14.707 acl: explicitly disabled via build config 00:03:14.707 bbdev: explicitly disabled via build config 00:03:14.707 bitratestats: explicitly disabled via build config 00:03:14.707 bpf: explicitly disabled via build config 00:03:14.707 cfgfile: explicitly disabled via build config 00:03:14.707 distributor: explicitly disabled via build config 00:03:14.707 efd: explicitly disabled via build config 00:03:14.707 eventdev: explicitly disabled via build config 00:03:14.707 dispatcher: explicitly disabled via build config 00:03:14.707 gpudev: explicitly disabled via build config 00:03:14.707 gro: explicitly disabled via build config 00:03:14.707 gso: explicitly disabled via build config 00:03:14.707 ip_frag: explicitly disabled via build config 00:03:14.707 jobstats: explicitly disabled via build config 00:03:14.707 latencystats: explicitly disabled via build config 00:03:14.707 lpm: explicitly disabled via build config 00:03:14.707 member: explicitly disabled via build config 00:03:14.707 pcapng: explicitly disabled via build config 00:03:14.707 rawdev: explicitly disabled via build config 00:03:14.707 regexdev: explicitly disabled via build config 00:03:14.707 mldev: explicitly disabled via build config 00:03:14.707 rib: explicitly disabled via build config 00:03:14.707 sched: explicitly disabled via build config 00:03:14.707 stack: explicitly disabled via build config 00:03:14.707 ipsec: explicitly disabled via build config 00:03:14.707 pdcp: explicitly disabled via build config 00:03:14.707 fib: explicitly disabled via build config 00:03:14.707 port: explicitly disabled via build config 00:03:14.707 pdump: explicitly disabled via build config 00:03:14.707 table: explicitly disabled via build config 00:03:14.707 pipeline: explicitly disabled via build config 00:03:14.707 graph: explicitly disabled via build config 00:03:14.707 node: explicitly disabled via build config 00:03:14.707 00:03:14.707 drivers: 00:03:14.707 common/cpt: not in enabled drivers build config 00:03:14.707 common/dpaax: not in enabled drivers build config 00:03:14.707 common/iavf: not in enabled drivers build config 00:03:14.707 common/idpf: not in enabled drivers build config 00:03:14.707 common/ionic: not in enabled drivers build config 00:03:14.707 common/mvep: not in enabled drivers build config 00:03:14.707 common/octeontx: not in enabled drivers build config 00:03:14.707 bus/auxiliary: not in enabled drivers build config 00:03:14.707 bus/cdx: not in enabled drivers build config 00:03:14.707 bus/dpaa: not in enabled drivers build config 00:03:14.707 bus/fslmc: not in enabled drivers build config 00:03:14.707 bus/ifpga: not in enabled drivers build config 00:03:14.707 bus/platform: not in enabled drivers build config 00:03:14.707 bus/uacce: not in enabled drivers build config 00:03:14.707 bus/vmbus: not in enabled drivers build config 00:03:14.707 common/cnxk: not in enabled drivers build config 00:03:14.707 common/mlx5: not in enabled drivers build config 00:03:14.707 common/nfp: not in enabled drivers build config 00:03:14.707 common/nitrox: not in enabled drivers build config 00:03:14.707 common/qat: not in enabled drivers build config 00:03:14.707 common/sfc_efx: not in 
enabled drivers build config 00:03:14.707 mempool/bucket: not in enabled drivers build config 00:03:14.707 mempool/cnxk: not in enabled drivers build config 00:03:14.707 mempool/dpaa: not in enabled drivers build config 00:03:14.707 mempool/dpaa2: not in enabled drivers build config 00:03:14.707 mempool/octeontx: not in enabled drivers build config 00:03:14.707 mempool/stack: not in enabled drivers build config 00:03:14.707 dma/cnxk: not in enabled drivers build config 00:03:14.707 dma/dpaa: not in enabled drivers build config 00:03:14.707 dma/dpaa2: not in enabled drivers build config 00:03:14.707 dma/hisilicon: not in enabled drivers build config 00:03:14.707 dma/idxd: not in enabled drivers build config 00:03:14.707 dma/ioat: not in enabled drivers build config 00:03:14.707 dma/skeleton: not in enabled drivers build config 00:03:14.707 net/af_packet: not in enabled drivers build config 00:03:14.707 net/af_xdp: not in enabled drivers build config 00:03:14.707 net/ark: not in enabled drivers build config 00:03:14.707 net/atlantic: not in enabled drivers build config 00:03:14.707 net/avp: not in enabled drivers build config 00:03:14.707 net/axgbe: not in enabled drivers build config 00:03:14.707 net/bnx2x: not in enabled drivers build config 00:03:14.707 net/bnxt: not in enabled drivers build config 00:03:14.707 net/bonding: not in enabled drivers build config 00:03:14.707 net/cnxk: not in enabled drivers build config 00:03:14.707 net/cpfl: not in enabled drivers build config 00:03:14.707 net/cxgbe: not in enabled drivers build config 00:03:14.707 net/dpaa: not in enabled drivers build config 00:03:14.707 net/dpaa2: not in enabled drivers build config 00:03:14.707 net/e1000: not in enabled drivers build config 00:03:14.707 net/ena: not in enabled drivers build config 00:03:14.707 net/enetc: not in enabled drivers build config 00:03:14.707 net/enetfec: not in enabled drivers build config 00:03:14.707 net/enic: not in enabled drivers build config 00:03:14.707 net/failsafe: not in enabled drivers build config 00:03:14.707 net/fm10k: not in enabled drivers build config 00:03:14.707 net/gve: not in enabled drivers build config 00:03:14.707 net/hinic: not in enabled drivers build config 00:03:14.707 net/hns3: not in enabled drivers build config 00:03:14.707 net/i40e: not in enabled drivers build config 00:03:14.707 net/iavf: not in enabled drivers build config 00:03:14.707 net/ice: not in enabled drivers build config 00:03:14.707 net/idpf: not in enabled drivers build config 00:03:14.707 net/igc: not in enabled drivers build config 00:03:14.707 net/ionic: not in enabled drivers build config 00:03:14.707 net/ipn3ke: not in enabled drivers build config 00:03:14.707 net/ixgbe: not in enabled drivers build config 00:03:14.707 net/mana: not in enabled drivers build config 00:03:14.707 net/memif: not in enabled drivers build config 00:03:14.707 net/mlx4: not in enabled drivers build config 00:03:14.707 net/mlx5: not in enabled drivers build config 00:03:14.707 net/mvneta: not in enabled drivers build config 00:03:14.707 net/mvpp2: not in enabled drivers build config 00:03:14.707 net/netvsc: not in enabled drivers build config 00:03:14.707 net/nfb: not in enabled drivers build config 00:03:14.707 net/nfp: not in enabled drivers build config 00:03:14.707 net/ngbe: not in enabled drivers build config 00:03:14.707 net/null: not in enabled drivers build config 00:03:14.707 net/octeontx: not in enabled drivers build config 00:03:14.707 net/octeon_ep: not in enabled drivers build config 00:03:14.707 
net/pcap: not in enabled drivers build config 00:03:14.707 net/pfe: not in enabled drivers build config 00:03:14.707 net/qede: not in enabled drivers build config 00:03:14.707 net/ring: not in enabled drivers build config 00:03:14.707 net/sfc: not in enabled drivers build config 00:03:14.707 net/softnic: not in enabled drivers build config 00:03:14.707 net/tap: not in enabled drivers build config 00:03:14.707 net/thunderx: not in enabled drivers build config 00:03:14.707 net/txgbe: not in enabled drivers build config 00:03:14.707 net/vdev_netvsc: not in enabled drivers build config 00:03:14.707 net/vhost: not in enabled drivers build config 00:03:14.707 net/virtio: not in enabled drivers build config 00:03:14.707 net/vmxnet3: not in enabled drivers build config 00:03:14.707 raw/*: missing internal dependency, "rawdev" 00:03:14.707 crypto/armv8: not in enabled drivers build config 00:03:14.708 crypto/bcmfs: not in enabled drivers build config 00:03:14.708 crypto/caam_jr: not in enabled drivers build config 00:03:14.708 crypto/ccp: not in enabled drivers build config 00:03:14.708 crypto/cnxk: not in enabled drivers build config 00:03:14.708 crypto/dpaa_sec: not in enabled drivers build config 00:03:14.708 crypto/dpaa2_sec: not in enabled drivers build config 00:03:14.708 crypto/ipsec_mb: not in enabled drivers build config 00:03:14.708 crypto/mlx5: not in enabled drivers build config 00:03:14.708 crypto/mvsam: not in enabled drivers build config 00:03:14.708 crypto/nitrox: not in enabled drivers build config 00:03:14.708 crypto/null: not in enabled drivers build config 00:03:14.708 crypto/octeontx: not in enabled drivers build config 00:03:14.708 crypto/openssl: not in enabled drivers build config 00:03:14.708 crypto/scheduler: not in enabled drivers build config 00:03:14.708 crypto/uadk: not in enabled drivers build config 00:03:14.708 crypto/virtio: not in enabled drivers build config 00:03:14.708 compress/isal: not in enabled drivers build config 00:03:14.708 compress/mlx5: not in enabled drivers build config 00:03:14.708 compress/nitrox: not in enabled drivers build config 00:03:14.708 compress/octeontx: not in enabled drivers build config 00:03:14.708 compress/zlib: not in enabled drivers build config 00:03:14.708 regex/*: missing internal dependency, "regexdev" 00:03:14.708 ml/*: missing internal dependency, "mldev" 00:03:14.708 vdpa/ifc: not in enabled drivers build config 00:03:14.708 vdpa/mlx5: not in enabled drivers build config 00:03:14.708 vdpa/nfp: not in enabled drivers build config 00:03:14.708 vdpa/sfc: not in enabled drivers build config 00:03:14.708 event/*: missing internal dependency, "eventdev" 00:03:14.708 baseband/*: missing internal dependency, "bbdev" 00:03:14.708 gpu/*: missing internal dependency, "gpudev" 00:03:14.708 00:03:14.708 00:03:14.708 Build targets in project: 85 00:03:14.708 00:03:14.708 DPDK 24.03.0 00:03:14.708 00:03:14.708 User defined options 00:03:14.708 buildtype : debug 00:03:14.708 default_library : shared 00:03:14.708 libdir : lib 00:03:14.708 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:14.708 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:14.708 c_link_args : 00:03:14.708 cpu_instruction_set: native 00:03:14.708 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:14.708 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:14.708 enable_docs : false 00:03:14.708 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:14.708 enable_kmods : false 00:03:14.708 max_lcores : 128 00:03:14.708 tests : false 00:03:14.708 00:03:14.708 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.708 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:14.708 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:14.708 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:14.708 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.708 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:14.708 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:14.708 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.708 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:14.708 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:14.708 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:14.708 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:14.708 [11/268] Linking static target lib/librte_kvargs.a 00:03:14.708 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:14.708 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.708 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:14.969 [15/268] Linking static target lib/librte_log.a 00:03:14.969 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.552 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.552 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:15.552 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:15.552 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:15.552 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:15.552 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:15.552 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:15.552 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:15.552 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:15.552 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:15.552 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:15.552 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:15.552 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:15.552 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
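The "User defined options" block above is where SPDK trims its bundled DPDK down to the 85 targets compiled here: every app and most libraries are disabled, and only the ring mempool plus the PCI/vdev bus drivers are enabled. Written out as a standalone invocation (option names and values copied from that block; SPDK's configure normally generates this, so treat it as a reconstruction):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Ddefault_library=shared \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test' \
        -Ddisable_libs='acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dmax_lcores=128 \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
    ninja -C build-tmp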
00:03:15.552 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:15.552 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:15.552 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:15.552 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:15.552 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.552 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:15.552 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:15.552 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:15.815 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:15.815 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:15.815 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:15.815 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:15.815 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:15.815 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:15.815 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:15.815 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:15.815 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:15.815 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:15.815 [49/268] Linking static target lib/librte_telemetry.a 00:03:15.815 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:15.815 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:15.815 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.815 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:15.815 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:15.815 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:15.815 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:15.815 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:15.815 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:15.815 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:15.815 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:15.815 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:15.815 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:15.815 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.815 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.084 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:16.084 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.084 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.084 [68/268] Linking target lib/librte_log.so.24.1 00:03:16.084 [69/268] Linking static target lib/librte_pci.a 00:03:16.352 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:16.352 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:16.352 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:16.352 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:16.352 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.611 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.611 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.611 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:16.611 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:16.611 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:16.611 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.611 [81/268] Linking target lib/librte_kvargs.so.24.1 00:03:16.611 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:16.611 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:16.611 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:16.611 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:16.611 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:16.611 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:16.611 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:16.611 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:16.611 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:16.611 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:16.611 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:16.611 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:16.611 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:16.611 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:16.611 [96/268] Linking static target lib/librte_ring.a 00:03:16.611 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:16.611 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:16.611 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:16.611 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:16.611 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:16.611 [102/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:16.611 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:16.873 [104/268] Linking static target lib/librte_meter.a 00:03:16.873 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:16.873 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:16.873 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.873 [108/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.873 [109/268] Linking static target lib/librte_eal.a 00:03:16.873 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:16.873 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:16.873 
[112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:16.873 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:16.873 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:16.873 [115/268] Linking static target lib/librte_rcu.a 00:03:16.873 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.873 [117/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:16.873 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:16.873 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:16.873 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:16.873 [121/268] Linking target lib/librte_telemetry.so.24.1 00:03:16.873 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:16.873 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:16.873 [124/268] Linking static target lib/librte_mempool.a 00:03:17.133 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:17.133 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:17.133 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.133 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:17.133 [129/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:17.133 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:17.133 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:17.133 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:17.133 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:17.133 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.133 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:17.403 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:17.403 [137/268] Linking static target lib/librte_net.a 00:03:17.403 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.403 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.403 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:17.403 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.403 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.403 [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.681 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.681 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.681 [146/268] Linking static target lib/librte_cmdline.a 00:03:17.681 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:17.681 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:17.681 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.681 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.681 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:17.681 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.681 [153/268] Linking static target lib/librte_timer.a 00:03:17.681 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:17.681 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:17.681 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.681 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:17.681 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.940 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.940 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:17.940 [161/268] Linking static target lib/librte_dmadev.a 00:03:17.940 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.940 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.940 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.940 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.940 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:17.940 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.940 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.196 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.196 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.196 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.196 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.196 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.196 [174/268] Linking static target lib/librte_power.a 00:03:18.196 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.196 [176/268] Linking static target lib/librte_hash.a 00:03:18.196 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.196 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.196 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.196 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:18.196 [181/268] Linking static target lib/librte_compressdev.a 00:03:18.196 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.196 [183/268] Linking static target lib/librte_reorder.a 00:03:18.196 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.196 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.196 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.453 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.453 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:18.453 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.453 [190/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.453 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.453 [192/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.453 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.453 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.453 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:18.453 [196/268] Linking static target lib/librte_mbuf.a 00:03:18.453 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:18.453 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.453 [199/268] Linking static target lib/librte_security.a 00:03:18.453 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:18.453 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.453 [202/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.453 [203/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.453 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.453 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.453 [206/268] Linking static target drivers/librte_bus_vdev.a 00:03:18.710 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.710 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.710 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:18.710 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.710 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.710 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.710 [213/268] Linking static target drivers/librte_mempool_ring.a 00:03:18.710 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.710 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.710 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.710 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.710 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:18.710 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.968 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.968 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:18.968 [222/268] Linking static target lib/librte_ethdev.a 00:03:18.968 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.968 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.968 [225/268] Linking static target lib/librte_cryptodev.a 00:03:19.225 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.158 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.126 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.025 [229/268] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:23.025 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.282 [231/268] Linking target lib/librte_eal.so.24.1 00:03:23.282 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.282 [233/268] Linking target lib/librte_meter.so.24.1 00:03:23.282 [234/268] Linking target lib/librte_ring.so.24.1 00:03:23.282 [235/268] Linking target lib/librte_pci.so.24.1 00:03:23.282 [236/268] Linking target lib/librte_timer.so.24.1 00:03:23.282 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.282 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.560 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.560 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.560 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.560 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.560 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.560 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:23.560 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:23.560 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.560 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.560 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.560 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.560 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.817 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.817 [252/268] Linking target lib/librte_net.so.24.1 00:03:23.817 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:23.817 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.817 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:24.075 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:24.075 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:24.075 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:24.075 [259/268] Linking target lib/librte_hash.so.24.1 00:03:24.075 [260/268] Linking target lib/librte_security.so.24.1 00:03:24.075 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:24.075 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:24.075 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:24.075 [264/268] Linking target lib/librte_power.so.24.1 00:03:26.612 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.612 [266/268] Linking static target lib/librte_vhost.a 00:03:27.543 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.543 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:27.543 INFO: autodetecting backend as ninja 00:03:27.543 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:28.477 CC lib/ut/ut.o 00:03:28.477 CC lib/log/log.o 00:03:28.477 CC lib/log/log_flags.o 00:03:28.477 CC lib/ut_mock/mock.o 00:03:28.477 CC lib/log/log_deprecated.o 00:03:28.735 LIB 
libspdk_ut_mock.a 00:03:28.735 LIB libspdk_ut.a 00:03:28.735 LIB libspdk_log.a 00:03:28.735 SO libspdk_ut.so.2.0 00:03:28.735 SO libspdk_ut_mock.so.6.0 00:03:28.735 SO libspdk_log.so.7.0 00:03:28.735 SYMLINK libspdk_ut.so 00:03:28.735 SYMLINK libspdk_ut_mock.so 00:03:28.735 SYMLINK libspdk_log.so 00:03:28.992 CC lib/dma/dma.o 00:03:28.992 CC lib/ioat/ioat.o 00:03:28.992 CC lib/util/base64.o 00:03:28.992 CXX lib/trace_parser/trace.o 00:03:28.992 CC lib/util/bit_array.o 00:03:28.992 CC lib/util/cpuset.o 00:03:28.992 CC lib/util/crc16.o 00:03:28.992 CC lib/util/crc32.o 00:03:28.992 CC lib/util/crc32c.o 00:03:28.992 CC lib/util/crc32_ieee.o 00:03:28.992 CC lib/util/crc64.o 00:03:28.992 CC lib/util/dif.o 00:03:28.992 CC lib/util/fd.o 00:03:28.992 CC lib/util/file.o 00:03:28.992 CC lib/util/hexlify.o 00:03:28.992 CC lib/util/iov.o 00:03:28.992 CC lib/util/math.o 00:03:28.992 CC lib/util/pipe.o 00:03:28.992 CC lib/util/strerror_tls.o 00:03:28.992 CC lib/util/string.o 00:03:28.992 CC lib/util/uuid.o 00:03:28.992 CC lib/util/fd_group.o 00:03:28.992 CC lib/util/xor.o 00:03:28.992 CC lib/util/zipf.o 00:03:28.992 CC lib/vfio_user/host/vfio_user_pci.o 00:03:28.992 CC lib/vfio_user/host/vfio_user.o 00:03:29.250 LIB libspdk_dma.a 00:03:29.250 SO libspdk_dma.so.4.0 00:03:29.250 SYMLINK libspdk_dma.so 00:03:29.250 LIB libspdk_ioat.a 00:03:29.250 SO libspdk_ioat.so.7.0 00:03:29.250 LIB libspdk_vfio_user.a 00:03:29.250 SYMLINK libspdk_ioat.so 00:03:29.250 SO libspdk_vfio_user.so.5.0 00:03:29.507 SYMLINK libspdk_vfio_user.so 00:03:29.507 LIB libspdk_util.a 00:03:29.507 SO libspdk_util.so.9.1 00:03:29.765 SYMLINK libspdk_util.so 00:03:29.765 CC lib/json/json_parse.o 00:03:29.765 CC lib/env_dpdk/env.o 00:03:29.765 CC lib/conf/conf.o 00:03:29.765 CC lib/vmd/vmd.o 00:03:29.765 CC lib/json/json_util.o 00:03:29.765 CC lib/env_dpdk/memory.o 00:03:29.765 CC lib/vmd/led.o 00:03:29.765 CC lib/idxd/idxd.o 00:03:29.765 CC lib/json/json_write.o 00:03:29.765 CC lib/env_dpdk/pci.o 00:03:29.765 CC lib/idxd/idxd_user.o 00:03:29.765 CC lib/rdma_utils/rdma_utils.o 00:03:29.765 CC lib/env_dpdk/init.o 00:03:29.765 CC lib/rdma_provider/common.o 00:03:29.765 CC lib/idxd/idxd_kernel.o 00:03:30.024 CC lib/env_dpdk/threads.o 00:03:30.024 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:30.024 CC lib/env_dpdk/pci_ioat.o 00:03:30.024 CC lib/env_dpdk/pci_virtio.o 00:03:30.024 CC lib/env_dpdk/pci_vmd.o 00:03:30.024 CC lib/env_dpdk/pci_idxd.o 00:03:30.024 CC lib/env_dpdk/pci_event.o 00:03:30.024 CC lib/env_dpdk/sigbus_handler.o 00:03:30.024 CC lib/env_dpdk/pci_dpdk.o 00:03:30.024 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.024 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.024 LIB libspdk_trace_parser.a 00:03:30.024 SO libspdk_trace_parser.so.5.0 00:03:30.024 SYMLINK libspdk_trace_parser.so 00:03:30.024 LIB libspdk_conf.a 00:03:30.282 LIB libspdk_rdma_provider.a 00:03:30.282 SO libspdk_conf.so.6.0 00:03:30.282 SO libspdk_rdma_provider.so.6.0 00:03:30.282 LIB libspdk_rdma_utils.a 00:03:30.282 LIB libspdk_json.a 00:03:30.282 SYMLINK libspdk_conf.so 00:03:30.282 SO libspdk_rdma_utils.so.1.0 00:03:30.282 SYMLINK libspdk_rdma_provider.so 00:03:30.282 SO libspdk_json.so.6.0 00:03:30.282 SYMLINK libspdk_rdma_utils.so 00:03:30.282 SYMLINK libspdk_json.so 00:03:30.540 CC lib/jsonrpc/jsonrpc_server.o 00:03:30.540 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:30.540 CC lib/jsonrpc/jsonrpc_client.o 00:03:30.540 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:30.540 LIB libspdk_idxd.a 00:03:30.540 SO libspdk_idxd.so.12.0 00:03:30.540 SYMLINK libspdk_idxd.so 00:03:30.540 
LIB libspdk_vmd.a 00:03:30.540 SO libspdk_vmd.so.6.0 00:03:30.540 SYMLINK libspdk_vmd.so 00:03:30.797 LIB libspdk_jsonrpc.a 00:03:30.797 SO libspdk_jsonrpc.so.6.0 00:03:30.797 SYMLINK libspdk_jsonrpc.so 00:03:31.054 CC lib/rpc/rpc.o 00:03:31.312 LIB libspdk_rpc.a 00:03:31.312 SO libspdk_rpc.so.6.0 00:03:31.312 SYMLINK libspdk_rpc.so 00:03:31.569 CC lib/notify/notify.o 00:03:31.569 CC lib/notify/notify_rpc.o 00:03:31.569 CC lib/keyring/keyring.o 00:03:31.569 CC lib/trace/trace.o 00:03:31.569 CC lib/keyring/keyring_rpc.o 00:03:31.569 CC lib/trace/trace_flags.o 00:03:31.569 CC lib/trace/trace_rpc.o 00:03:31.569 LIB libspdk_notify.a 00:03:31.569 SO libspdk_notify.so.6.0 00:03:31.828 LIB libspdk_keyring.a 00:03:31.828 SYMLINK libspdk_notify.so 00:03:31.828 LIB libspdk_trace.a 00:03:31.828 SO libspdk_keyring.so.1.0 00:03:31.828 SO libspdk_trace.so.10.0 00:03:31.828 SYMLINK libspdk_keyring.so 00:03:31.828 SYMLINK libspdk_trace.so 00:03:31.828 LIB libspdk_env_dpdk.a 00:03:32.087 SO libspdk_env_dpdk.so.14.1 00:03:32.087 CC lib/thread/thread.o 00:03:32.087 CC lib/thread/iobuf.o 00:03:32.087 CC lib/sock/sock.o 00:03:32.087 CC lib/sock/sock_rpc.o 00:03:32.087 SYMLINK libspdk_env_dpdk.so 00:03:32.345 LIB libspdk_sock.a 00:03:32.345 SO libspdk_sock.so.10.0 00:03:32.345 SYMLINK libspdk_sock.so 00:03:32.604 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:32.604 CC lib/nvme/nvme_ctrlr.o 00:03:32.604 CC lib/nvme/nvme_fabric.o 00:03:32.604 CC lib/nvme/nvme_ns_cmd.o 00:03:32.604 CC lib/nvme/nvme_ns.o 00:03:32.604 CC lib/nvme/nvme_pcie_common.o 00:03:32.604 CC lib/nvme/nvme_pcie.o 00:03:32.604 CC lib/nvme/nvme_qpair.o 00:03:32.604 CC lib/nvme/nvme.o 00:03:32.604 CC lib/nvme/nvme_quirks.o 00:03:32.604 CC lib/nvme/nvme_transport.o 00:03:32.604 CC lib/nvme/nvme_discovery.o 00:03:32.604 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:32.604 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:32.604 CC lib/nvme/nvme_tcp.o 00:03:32.604 CC lib/nvme/nvme_opal.o 00:03:32.604 CC lib/nvme/nvme_io_msg.o 00:03:32.604 CC lib/nvme/nvme_poll_group.o 00:03:32.604 CC lib/nvme/nvme_zns.o 00:03:32.604 CC lib/nvme/nvme_stubs.o 00:03:32.604 CC lib/nvme/nvme_auth.o 00:03:32.604 CC lib/nvme/nvme_cuse.o 00:03:32.604 CC lib/nvme/nvme_vfio_user.o 00:03:32.604 CC lib/nvme/nvme_rdma.o 00:03:33.537 LIB libspdk_thread.a 00:03:33.537 SO libspdk_thread.so.10.1 00:03:33.537 SYMLINK libspdk_thread.so 00:03:33.795 CC lib/blob/blobstore.o 00:03:33.795 CC lib/vfu_tgt/tgt_endpoint.o 00:03:33.795 CC lib/virtio/virtio.o 00:03:33.795 CC lib/accel/accel.o 00:03:33.795 CC lib/vfu_tgt/tgt_rpc.o 00:03:33.795 CC lib/init/json_config.o 00:03:33.795 CC lib/virtio/virtio_vhost_user.o 00:03:33.795 CC lib/blob/request.o 00:03:33.795 CC lib/accel/accel_rpc.o 00:03:33.795 CC lib/init/subsystem.o 00:03:33.795 CC lib/virtio/virtio_vfio_user.o 00:03:33.795 CC lib/blob/zeroes.o 00:03:33.795 CC lib/accel/accel_sw.o 00:03:33.795 CC lib/init/subsystem_rpc.o 00:03:33.795 CC lib/virtio/virtio_pci.o 00:03:33.795 CC lib/blob/blob_bs_dev.o 00:03:33.795 CC lib/init/rpc.o 00:03:34.052 LIB libspdk_init.a 00:03:34.052 SO libspdk_init.so.5.0 00:03:34.052 LIB libspdk_vfu_tgt.a 00:03:34.052 LIB libspdk_virtio.a 00:03:34.052 SYMLINK libspdk_init.so 00:03:34.052 SO libspdk_vfu_tgt.so.3.0 00:03:34.052 SO libspdk_virtio.so.7.0 00:03:34.309 SYMLINK libspdk_vfu_tgt.so 00:03:34.309 SYMLINK libspdk_virtio.so 00:03:34.309 CC lib/event/app.o 00:03:34.309 CC lib/event/reactor.o 00:03:34.309 CC lib/event/log_rpc.o 00:03:34.309 CC lib/event/app_rpc.o 00:03:34.309 CC lib/event/scheduler_static.o 00:03:34.873 LIB libspdk_event.a 
00:03:34.873 SO libspdk_event.so.14.0 00:03:34.873 SYMLINK libspdk_event.so 00:03:34.873 LIB libspdk_accel.a 00:03:34.873 SO libspdk_accel.so.15.1 00:03:34.873 SYMLINK libspdk_accel.so 00:03:35.130 LIB libspdk_nvme.a 00:03:35.130 CC lib/bdev/bdev.o 00:03:35.130 CC lib/bdev/bdev_rpc.o 00:03:35.130 CC lib/bdev/bdev_zone.o 00:03:35.130 CC lib/bdev/part.o 00:03:35.130 CC lib/bdev/scsi_nvme.o 00:03:35.130 SO libspdk_nvme.so.13.1 00:03:35.419 SYMLINK libspdk_nvme.so 00:03:36.815 LIB libspdk_blob.a 00:03:36.815 SO libspdk_blob.so.11.0 00:03:36.815 SYMLINK libspdk_blob.so 00:03:37.072 CC lib/lvol/lvol.o 00:03:37.072 CC lib/blobfs/blobfs.o 00:03:37.072 CC lib/blobfs/tree.o 00:03:37.638 LIB libspdk_bdev.a 00:03:37.638 SO libspdk_bdev.so.15.1 00:03:37.901 LIB libspdk_blobfs.a 00:03:37.901 SYMLINK libspdk_bdev.so 00:03:37.901 SO libspdk_blobfs.so.10.0 00:03:37.901 SYMLINK libspdk_blobfs.so 00:03:37.901 LIB libspdk_lvol.a 00:03:37.901 SO libspdk_lvol.so.10.0 00:03:37.901 CC lib/scsi/dev.o 00:03:37.901 CC lib/ublk/ublk.o 00:03:37.901 CC lib/nbd/nbd.o 00:03:37.901 CC lib/nbd/nbd_rpc.o 00:03:37.901 CC lib/scsi/lun.o 00:03:37.901 CC lib/ublk/ublk_rpc.o 00:03:37.901 CC lib/nvmf/ctrlr.o 00:03:37.901 CC lib/ftl/ftl_core.o 00:03:37.901 CC lib/scsi/port.o 00:03:37.901 CC lib/nvmf/ctrlr_discovery.o 00:03:37.901 CC lib/ftl/ftl_init.o 00:03:37.901 CC lib/scsi/scsi.o 00:03:37.901 CC lib/nvmf/ctrlr_bdev.o 00:03:37.901 CC lib/ftl/ftl_layout.o 00:03:37.901 CC lib/scsi/scsi_bdev.o 00:03:37.901 CC lib/nvmf/subsystem.o 00:03:37.901 CC lib/ftl/ftl_debug.o 00:03:37.901 CC lib/scsi/scsi_pr.o 00:03:37.901 CC lib/nvmf/nvmf.o 00:03:37.901 CC lib/ftl/ftl_io.o 00:03:37.901 SYMLINK libspdk_lvol.so 00:03:37.901 CC lib/ftl/ftl_sb.o 00:03:37.901 CC lib/nvmf/nvmf_rpc.o 00:03:37.901 CC lib/nvmf/transport.o 00:03:37.901 CC lib/ftl/ftl_l2p.o 00:03:37.901 CC lib/scsi/scsi_rpc.o 00:03:37.901 CC lib/ftl/ftl_l2p_flat.o 00:03:37.901 CC lib/scsi/task.o 00:03:37.901 CC lib/nvmf/tcp.o 00:03:37.901 CC lib/nvmf/stubs.o 00:03:37.901 CC lib/ftl/ftl_nv_cache.o 00:03:37.901 CC lib/ftl/ftl_band.o 00:03:37.901 CC lib/nvmf/mdns_server.o 00:03:37.901 CC lib/ftl/ftl_band_ops.o 00:03:37.901 CC lib/nvmf/vfio_user.o 00:03:37.901 CC lib/ftl/ftl_writer.o 00:03:37.901 CC lib/nvmf/rdma.o 00:03:37.901 CC lib/ftl/ftl_rq.o 00:03:37.901 CC lib/nvmf/auth.o 00:03:37.901 CC lib/ftl/ftl_reloc.o 00:03:37.901 CC lib/ftl/ftl_l2p_cache.o 00:03:37.901 CC lib/ftl/ftl_p2l.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:37.901 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:38.472 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:38.472 CC lib/ftl/utils/ftl_conf.o 00:03:38.472 CC lib/ftl/utils/ftl_md.o 00:03:38.472 CC lib/ftl/utils/ftl_mempool.o 00:03:38.472 CC lib/ftl/utils/ftl_bitmap.o 00:03:38.472 CC lib/ftl/utils/ftl_property.o 00:03:38.472 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:38.472 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:38.472 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:38.472 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:38.472 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:38.472 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:03:38.472 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:38.472 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:38.472 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:38.732 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:38.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:38.732 CC lib/ftl/base/ftl_base_dev.o 00:03:38.732 CC lib/ftl/base/ftl_base_bdev.o 00:03:38.732 CC lib/ftl/ftl_trace.o 00:03:38.732 LIB libspdk_nbd.a 00:03:38.990 SO libspdk_nbd.so.7.0 00:03:38.990 LIB libspdk_scsi.a 00:03:38.990 SYMLINK libspdk_nbd.so 00:03:38.990 SO libspdk_scsi.so.9.0 00:03:38.990 SYMLINK libspdk_scsi.so 00:03:38.990 LIB libspdk_ublk.a 00:03:39.248 SO libspdk_ublk.so.3.0 00:03:39.248 SYMLINK libspdk_ublk.so 00:03:39.248 CC lib/vhost/vhost.o 00:03:39.248 CC lib/iscsi/conn.o 00:03:39.248 CC lib/vhost/vhost_rpc.o 00:03:39.248 CC lib/iscsi/init_grp.o 00:03:39.248 CC lib/vhost/vhost_scsi.o 00:03:39.248 CC lib/vhost/vhost_blk.o 00:03:39.248 CC lib/iscsi/iscsi.o 00:03:39.248 CC lib/vhost/rte_vhost_user.o 00:03:39.248 CC lib/iscsi/md5.o 00:03:39.248 CC lib/iscsi/param.o 00:03:39.248 CC lib/iscsi/portal_grp.o 00:03:39.248 CC lib/iscsi/tgt_node.o 00:03:39.248 CC lib/iscsi/iscsi_rpc.o 00:03:39.248 CC lib/iscsi/iscsi_subsystem.o 00:03:39.248 CC lib/iscsi/task.o 00:03:39.506 LIB libspdk_ftl.a 00:03:39.764 SO libspdk_ftl.so.9.0 00:03:40.022 SYMLINK libspdk_ftl.so 00:03:40.587 LIB libspdk_vhost.a 00:03:40.587 SO libspdk_vhost.so.8.0 00:03:40.587 LIB libspdk_nvmf.a 00:03:40.587 SYMLINK libspdk_vhost.so 00:03:40.587 SO libspdk_nvmf.so.19.0 00:03:40.587 LIB libspdk_iscsi.a 00:03:40.845 SO libspdk_iscsi.so.8.0 00:03:40.845 SYMLINK libspdk_nvmf.so 00:03:40.845 SYMLINK libspdk_iscsi.so 00:03:41.104 CC module/env_dpdk/env_dpdk_rpc.o 00:03:41.104 CC module/vfu_device/vfu_virtio.o 00:03:41.104 CC module/vfu_device/vfu_virtio_blk.o 00:03:41.104 CC module/vfu_device/vfu_virtio_scsi.o 00:03:41.104 CC module/vfu_device/vfu_virtio_rpc.o 00:03:41.361 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:41.361 CC module/accel/dsa/accel_dsa.o 00:03:41.361 CC module/keyring/file/keyring.o 00:03:41.361 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:41.361 CC module/blob/bdev/blob_bdev.o 00:03:41.361 CC module/keyring/file/keyring_rpc.o 00:03:41.361 CC module/accel/dsa/accel_dsa_rpc.o 00:03:41.361 CC module/accel/iaa/accel_iaa.o 00:03:41.361 CC module/accel/ioat/accel_ioat.o 00:03:41.361 CC module/accel/ioat/accel_ioat_rpc.o 00:03:41.361 CC module/accel/error/accel_error.o 00:03:41.361 CC module/accel/iaa/accel_iaa_rpc.o 00:03:41.361 CC module/sock/posix/posix.o 00:03:41.361 CC module/keyring/linux/keyring.o 00:03:41.361 CC module/keyring/linux/keyring_rpc.o 00:03:41.361 CC module/scheduler/gscheduler/gscheduler.o 00:03:41.361 CC module/accel/error/accel_error_rpc.o 00:03:41.361 LIB libspdk_env_dpdk_rpc.a 00:03:41.361 SO libspdk_env_dpdk_rpc.so.6.0 00:03:41.361 SYMLINK libspdk_env_dpdk_rpc.so 00:03:41.361 LIB libspdk_keyring_linux.a 00:03:41.361 LIB libspdk_keyring_file.a 00:03:41.361 LIB libspdk_scheduler_dpdk_governor.a 00:03:41.361 LIB libspdk_scheduler_gscheduler.a 00:03:41.361 SO libspdk_keyring_file.so.1.0 00:03:41.361 SO libspdk_keyring_linux.so.1.0 00:03:41.361 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:41.361 SO libspdk_scheduler_gscheduler.so.4.0 00:03:41.361 LIB libspdk_accel_error.a 00:03:41.361 LIB libspdk_accel_ioat.a 00:03:41.361 LIB libspdk_scheduler_dynamic.a 00:03:41.618 LIB libspdk_accel_iaa.a 00:03:41.618 SO libspdk_accel_error.so.2.0 00:03:41.618 SO libspdk_scheduler_dynamic.so.4.0 00:03:41.618 SO libspdk_accel_ioat.so.6.0 00:03:41.618 
SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:41.618 SYMLINK libspdk_keyring_file.so 00:03:41.618 SYMLINK libspdk_keyring_linux.so 00:03:41.618 SYMLINK libspdk_scheduler_gscheduler.so 00:03:41.618 SO libspdk_accel_iaa.so.3.0 00:03:41.618 LIB libspdk_accel_dsa.a 00:03:41.618 SYMLINK libspdk_accel_error.so 00:03:41.618 SYMLINK libspdk_scheduler_dynamic.so 00:03:41.618 LIB libspdk_blob_bdev.a 00:03:41.618 SYMLINK libspdk_accel_ioat.so 00:03:41.618 SO libspdk_accel_dsa.so.5.0 00:03:41.618 SYMLINK libspdk_accel_iaa.so 00:03:41.618 SO libspdk_blob_bdev.so.11.0 00:03:41.618 SYMLINK libspdk_blob_bdev.so 00:03:41.618 SYMLINK libspdk_accel_dsa.so 00:03:41.882 LIB libspdk_vfu_device.a 00:03:41.882 SO libspdk_vfu_device.so.3.0 00:03:41.882 CC module/bdev/lvol/vbdev_lvol.o 00:03:41.882 CC module/blobfs/bdev/blobfs_bdev.o 00:03:41.882 CC module/bdev/delay/vbdev_delay.o 00:03:41.882 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:41.882 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:41.882 CC module/bdev/malloc/bdev_malloc.o 00:03:41.882 CC module/bdev/gpt/gpt.o 00:03:41.882 CC module/bdev/split/vbdev_split.o 00:03:41.882 CC module/bdev/gpt/vbdev_gpt.o 00:03:41.882 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:41.882 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:41.882 CC module/bdev/passthru/vbdev_passthru.o 00:03:41.882 CC module/bdev/nvme/bdev_nvme.o 00:03:41.882 CC module/bdev/null/bdev_null.o 00:03:41.882 CC module/bdev/split/vbdev_split_rpc.o 00:03:41.882 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:41.882 CC module/bdev/error/vbdev_error.o 00:03:41.882 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:41.882 CC module/bdev/null/bdev_null_rpc.o 00:03:41.882 CC module/bdev/nvme/bdev_mdns_client.o 00:03:41.882 CC module/bdev/nvme/nvme_rpc.o 00:03:41.882 CC module/bdev/error/vbdev_error_rpc.o 00:03:41.882 CC module/bdev/nvme/vbdev_opal.o 00:03:41.882 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:41.882 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:41.882 CC module/bdev/ftl/bdev_ftl.o 00:03:41.882 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:41.882 CC module/bdev/aio/bdev_aio.o 00:03:41.882 CC module/bdev/raid/bdev_raid.o 00:03:41.882 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:41.882 CC module/bdev/iscsi/bdev_iscsi.o 00:03:41.882 CC module/bdev/aio/bdev_aio_rpc.o 00:03:41.882 CC module/bdev/raid/bdev_raid_rpc.o 00:03:41.882 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:41.882 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:41.882 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:41.882 CC module/bdev/raid/bdev_raid_sb.o 00:03:41.882 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:41.882 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:41.882 CC module/bdev/raid/raid0.o 00:03:41.882 CC module/bdev/raid/raid1.o 00:03:41.882 CC module/bdev/raid/concat.o 00:03:41.882 SYMLINK libspdk_vfu_device.so 00:03:42.140 LIB libspdk_sock_posix.a 00:03:42.140 SO libspdk_sock_posix.so.6.0 00:03:42.140 LIB libspdk_bdev_error.a 00:03:42.140 LIB libspdk_blobfs_bdev.a 00:03:42.398 SO libspdk_bdev_error.so.6.0 00:03:42.398 SO libspdk_blobfs_bdev.so.6.0 00:03:42.398 LIB libspdk_bdev_split.a 00:03:42.398 SO libspdk_bdev_split.so.6.0 00:03:42.398 SYMLINK libspdk_bdev_error.so 00:03:42.398 SYMLINK libspdk_sock_posix.so 00:03:42.398 SYMLINK libspdk_blobfs_bdev.so 00:03:42.398 SYMLINK libspdk_bdev_split.so 00:03:42.398 LIB libspdk_bdev_gpt.a 00:03:42.398 LIB libspdk_bdev_null.a 00:03:42.398 SO libspdk_bdev_gpt.so.6.0 00:03:42.398 SO libspdk_bdev_null.so.6.0 00:03:42.398 LIB libspdk_bdev_ftl.a 00:03:42.398 LIB libspdk_bdev_zone_block.a 
00:03:42.398 LIB libspdk_bdev_passthru.a 00:03:42.398 SO libspdk_bdev_ftl.so.6.0 00:03:42.398 SO libspdk_bdev_zone_block.so.6.0 00:03:42.398 SO libspdk_bdev_passthru.so.6.0 00:03:42.398 LIB libspdk_bdev_aio.a 00:03:42.398 SYMLINK libspdk_bdev_null.so 00:03:42.398 SYMLINK libspdk_bdev_gpt.so 00:03:42.398 LIB libspdk_bdev_malloc.a 00:03:42.398 LIB libspdk_bdev_iscsi.a 00:03:42.398 SO libspdk_bdev_aio.so.6.0 00:03:42.398 LIB libspdk_bdev_virtio.a 00:03:42.656 SO libspdk_bdev_malloc.so.6.0 00:03:42.656 LIB libspdk_bdev_delay.a 00:03:42.656 SYMLINK libspdk_bdev_zone_block.so 00:03:42.656 SYMLINK libspdk_bdev_ftl.so 00:03:42.656 SO libspdk_bdev_iscsi.so.6.0 00:03:42.656 SYMLINK libspdk_bdev_passthru.so 00:03:42.656 SO libspdk_bdev_virtio.so.6.0 00:03:42.656 SO libspdk_bdev_delay.so.6.0 00:03:42.656 SYMLINK libspdk_bdev_aio.so 00:03:42.656 SYMLINK libspdk_bdev_malloc.so 00:03:42.656 SYMLINK libspdk_bdev_iscsi.so 00:03:42.656 SYMLINK libspdk_bdev_virtio.so 00:03:42.656 SYMLINK libspdk_bdev_delay.so 00:03:42.656 LIB libspdk_bdev_lvol.a 00:03:42.656 SO libspdk_bdev_lvol.so.6.0 00:03:42.656 SYMLINK libspdk_bdev_lvol.so 00:03:43.223 LIB libspdk_bdev_raid.a 00:03:43.223 SO libspdk_bdev_raid.so.6.0 00:03:43.223 SYMLINK libspdk_bdev_raid.so 00:03:44.158 LIB libspdk_bdev_nvme.a 00:03:44.415 SO libspdk_bdev_nvme.so.7.0 00:03:44.415 SYMLINK libspdk_bdev_nvme.so 00:03:44.672 CC module/event/subsystems/vmd/vmd.o 00:03:44.672 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:44.672 CC module/event/subsystems/iobuf/iobuf.o 00:03:44.672 CC module/event/subsystems/scheduler/scheduler.o 00:03:44.672 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:44.672 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:44.672 CC module/event/subsystems/sock/sock.o 00:03:44.672 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:44.672 CC module/event/subsystems/keyring/keyring.o 00:03:44.929 LIB libspdk_event_keyring.a 00:03:44.929 LIB libspdk_event_vhost_blk.a 00:03:44.929 LIB libspdk_event_vfu_tgt.a 00:03:44.929 LIB libspdk_event_scheduler.a 00:03:44.930 LIB libspdk_event_sock.a 00:03:44.930 LIB libspdk_event_vmd.a 00:03:44.930 SO libspdk_event_keyring.so.1.0 00:03:44.930 LIB libspdk_event_iobuf.a 00:03:44.930 SO libspdk_event_vhost_blk.so.3.0 00:03:44.930 SO libspdk_event_sock.so.5.0 00:03:44.930 SO libspdk_event_scheduler.so.4.0 00:03:44.930 SO libspdk_event_vfu_tgt.so.3.0 00:03:44.930 SO libspdk_event_vmd.so.6.0 00:03:44.930 SO libspdk_event_iobuf.so.3.0 00:03:44.930 SYMLINK libspdk_event_keyring.so 00:03:44.930 SYMLINK libspdk_event_vhost_blk.so 00:03:44.930 SYMLINK libspdk_event_vfu_tgt.so 00:03:44.930 SYMLINK libspdk_event_scheduler.so 00:03:44.930 SYMLINK libspdk_event_sock.so 00:03:44.930 SYMLINK libspdk_event_vmd.so 00:03:44.930 SYMLINK libspdk_event_iobuf.so 00:03:45.188 CC module/event/subsystems/accel/accel.o 00:03:45.445 LIB libspdk_event_accel.a 00:03:45.445 SO libspdk_event_accel.so.6.0 00:03:45.445 SYMLINK libspdk_event_accel.so 00:03:45.704 CC module/event/subsystems/bdev/bdev.o 00:03:45.704 LIB libspdk_event_bdev.a 00:03:45.704 SO libspdk_event_bdev.so.6.0 00:03:45.961 SYMLINK libspdk_event_bdev.so 00:03:45.961 CC module/event/subsystems/nbd/nbd.o 00:03:45.961 CC module/event/subsystems/scsi/scsi.o 00:03:45.961 CC module/event/subsystems/ublk/ublk.o 00:03:45.961 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:45.961 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:46.217 LIB libspdk_event_nbd.a 00:03:46.217 LIB libspdk_event_ublk.a 00:03:46.217 LIB libspdk_event_scsi.a 00:03:46.217 SO 
libspdk_event_nbd.so.6.0 00:03:46.217 SO libspdk_event_ublk.so.3.0 00:03:46.217 SO libspdk_event_scsi.so.6.0 00:03:46.217 SYMLINK libspdk_event_ublk.so 00:03:46.217 SYMLINK libspdk_event_nbd.so 00:03:46.217 SYMLINK libspdk_event_scsi.so 00:03:46.217 LIB libspdk_event_nvmf.a 00:03:46.217 SO libspdk_event_nvmf.so.6.0 00:03:46.217 SYMLINK libspdk_event_nvmf.so 00:03:46.474 CC module/event/subsystems/iscsi/iscsi.o 00:03:46.474 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:46.474 LIB libspdk_event_vhost_scsi.a 00:03:46.474 LIB libspdk_event_iscsi.a 00:03:46.474 SO libspdk_event_vhost_scsi.so.3.0 00:03:46.732 SO libspdk_event_iscsi.so.6.0 00:03:46.732 SYMLINK libspdk_event_vhost_scsi.so 00:03:46.732 SYMLINK libspdk_event_iscsi.so 00:03:46.732 SO libspdk.so.6.0 00:03:46.732 SYMLINK libspdk.so 00:03:46.997 CXX app/trace/trace.o 00:03:46.997 CC app/trace_record/trace_record.o 00:03:46.997 CC app/spdk_lspci/spdk_lspci.o 00:03:46.997 CC app/spdk_top/spdk_top.o 00:03:46.997 CC app/spdk_nvme_perf/perf.o 00:03:46.997 CC app/spdk_nvme_identify/identify.o 00:03:46.997 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.997 CC test/rpc_client/rpc_client_test.o 00:03:46.997 TEST_HEADER include/spdk/accel.h 00:03:46.997 TEST_HEADER include/spdk/accel_module.h 00:03:46.997 TEST_HEADER include/spdk/assert.h 00:03:46.997 TEST_HEADER include/spdk/barrier.h 00:03:46.997 TEST_HEADER include/spdk/base64.h 00:03:46.997 TEST_HEADER include/spdk/bdev.h 00:03:46.997 TEST_HEADER include/spdk/bdev_module.h 00:03:46.997 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.997 TEST_HEADER include/spdk/bit_array.h 00:03:46.997 TEST_HEADER include/spdk/bit_pool.h 00:03:46.997 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.997 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.997 TEST_HEADER include/spdk/blobfs.h 00:03:46.997 TEST_HEADER include/spdk/blob.h 00:03:46.997 TEST_HEADER include/spdk/conf.h 00:03:46.997 TEST_HEADER include/spdk/config.h 00:03:46.997 TEST_HEADER include/spdk/cpuset.h 00:03:46.997 TEST_HEADER include/spdk/crc16.h 00:03:46.997 TEST_HEADER include/spdk/crc32.h 00:03:46.997 TEST_HEADER include/spdk/crc64.h 00:03:46.997 TEST_HEADER include/spdk/dif.h 00:03:46.997 TEST_HEADER include/spdk/dma.h 00:03:46.997 TEST_HEADER include/spdk/endian.h 00:03:46.997 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.997 TEST_HEADER include/spdk/env.h 00:03:46.997 TEST_HEADER include/spdk/event.h 00:03:46.997 TEST_HEADER include/spdk/fd_group.h 00:03:46.997 TEST_HEADER include/spdk/fd.h 00:03:46.997 TEST_HEADER include/spdk/file.h 00:03:46.997 TEST_HEADER include/spdk/ftl.h 00:03:46.997 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.997 TEST_HEADER include/spdk/hexlify.h 00:03:46.997 TEST_HEADER include/spdk/histogram_data.h 00:03:46.997 TEST_HEADER include/spdk/idxd.h 00:03:46.997 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.997 TEST_HEADER include/spdk/init.h 00:03:46.997 TEST_HEADER include/spdk/ioat.h 00:03:46.997 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.997 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.997 TEST_HEADER include/spdk/json.h 00:03:46.997 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.997 TEST_HEADER include/spdk/keyring_module.h 00:03:46.997 TEST_HEADER include/spdk/keyring.h 00:03:46.997 TEST_HEADER include/spdk/likely.h 00:03:46.997 TEST_HEADER include/spdk/log.h 00:03:46.997 TEST_HEADER include/spdk/lvol.h 00:03:46.997 TEST_HEADER include/spdk/memory.h 00:03:46.997 TEST_HEADER include/spdk/mmio.h 00:03:46.997 TEST_HEADER include/spdk/nbd.h 00:03:46.997 TEST_HEADER include/spdk/notify.h 00:03:46.997 
TEST_HEADER include/spdk/nvme_intel.h 00:03:46.997 TEST_HEADER include/spdk/nvme.h 00:03:46.997 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.997 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.997 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.997 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.997 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.997 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.997 TEST_HEADER include/spdk/nvmf.h 00:03:46.997 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.997 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.997 TEST_HEADER include/spdk/opal.h 00:03:46.997 TEST_HEADER include/spdk/opal_spec.h 00:03:46.997 TEST_HEADER include/spdk/pci_ids.h 00:03:46.997 TEST_HEADER include/spdk/pipe.h 00:03:46.997 TEST_HEADER include/spdk/queue.h 00:03:46.997 TEST_HEADER include/spdk/reduce.h 00:03:46.997 TEST_HEADER include/spdk/rpc.h 00:03:46.997 TEST_HEADER include/spdk/scheduler.h 00:03:46.997 TEST_HEADER include/spdk/scsi.h 00:03:46.997 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.997 TEST_HEADER include/spdk/sock.h 00:03:46.997 TEST_HEADER include/spdk/stdinc.h 00:03:46.997 TEST_HEADER include/spdk/string.h 00:03:46.997 TEST_HEADER include/spdk/thread.h 00:03:46.997 TEST_HEADER include/spdk/trace.h 00:03:46.997 TEST_HEADER include/spdk/trace_parser.h 00:03:46.997 TEST_HEADER include/spdk/tree.h 00:03:46.997 TEST_HEADER include/spdk/ublk.h 00:03:46.997 TEST_HEADER include/spdk/uuid.h 00:03:46.997 TEST_HEADER include/spdk/util.h 00:03:46.997 TEST_HEADER include/spdk/version.h 00:03:46.997 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.997 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.997 TEST_HEADER include/spdk/vhost.h 00:03:46.997 TEST_HEADER include/spdk/vmd.h 00:03:46.997 TEST_HEADER include/spdk/xor.h 00:03:46.997 TEST_HEADER include/spdk/zipf.h 00:03:46.997 CXX test/cpp_headers/accel.o 00:03:46.997 CXX test/cpp_headers/accel_module.o 00:03:46.997 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.997 CXX test/cpp_headers/assert.o 00:03:46.997 CXX test/cpp_headers/barrier.o 00:03:46.997 CXX test/cpp_headers/base64.o 00:03:46.997 CXX test/cpp_headers/bdev.o 00:03:46.997 CXX test/cpp_headers/bdev_module.o 00:03:46.997 CXX test/cpp_headers/bdev_zone.o 00:03:46.998 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.998 CXX test/cpp_headers/bit_array.o 00:03:46.998 CXX test/cpp_headers/bit_pool.o 00:03:46.998 CC app/spdk_dd/spdk_dd.o 00:03:46.998 CXX test/cpp_headers/blob_bdev.o 00:03:46.998 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.998 CXX test/cpp_headers/blobfs.o 00:03:46.998 CC app/nvmf_tgt/nvmf_main.o 00:03:46.998 CXX test/cpp_headers/blob.o 00:03:46.998 CXX test/cpp_headers/conf.o 00:03:46.998 CXX test/cpp_headers/config.o 00:03:46.998 CXX test/cpp_headers/cpuset.o 00:03:46.998 CXX test/cpp_headers/crc16.o 00:03:46.998 CXX test/cpp_headers/crc32.o 00:03:46.998 CC test/env/vtophys/vtophys.o 00:03:46.998 CC test/app/jsoncat/jsoncat.o 00:03:46.998 CC examples/ioat/perf/perf.o 00:03:46.998 CC examples/ioat/verify/verify.o 00:03:46.998 CC test/thread/poller_perf/poller_perf.o 00:03:46.998 CC examples/util/zipf/zipf.o 00:03:46.998 CC app/spdk_tgt/spdk_tgt.o 00:03:46.998 CC test/app/stub/stub.o 00:03:46.998 CC test/env/pci/pci_ut.o 00:03:46.998 CC test/env/memory/memory_ut.o 00:03:46.998 CC app/fio/nvme/fio_plugin.o 00:03:46.998 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.998 CC test/app/histogram_perf/histogram_perf.o 00:03:47.255 CC test/dma/test_dma/test_dma.o 00:03:47.255 CC test/app/bdev_svc/bdev_svc.o 00:03:47.255 CC app/fio/bdev/fio_plugin.o 
00:03:47.256 LINK spdk_lspci 00:03:47.256 CC test/env/mem_callbacks/mem_callbacks.o 00:03:47.256 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:47.522 LINK rpc_client_test 00:03:47.522 LINK jsoncat 00:03:47.522 LINK spdk_nvme_discover 00:03:47.522 LINK poller_perf 00:03:47.522 LINK vtophys 00:03:47.522 LINK zipf 00:03:47.522 LINK histogram_perf 00:03:47.522 LINK interrupt_tgt 00:03:47.522 LINK nvmf_tgt 00:03:47.522 CXX test/cpp_headers/crc64.o 00:03:47.522 CXX test/cpp_headers/dif.o 00:03:47.522 CXX test/cpp_headers/dma.o 00:03:47.522 CXX test/cpp_headers/endian.o 00:03:47.522 CXX test/cpp_headers/env_dpdk.o 00:03:47.522 CXX test/cpp_headers/env.o 00:03:47.522 CXX test/cpp_headers/event.o 00:03:47.522 CXX test/cpp_headers/fd_group.o 00:03:47.522 CXX test/cpp_headers/fd.o 00:03:47.522 LINK spdk_trace_record 00:03:47.522 LINK env_dpdk_post_init 00:03:47.522 CXX test/cpp_headers/file.o 00:03:47.522 LINK stub 00:03:47.522 CXX test/cpp_headers/ftl.o 00:03:47.522 LINK iscsi_tgt 00:03:47.522 CXX test/cpp_headers/gpt_spec.o 00:03:47.522 CXX test/cpp_headers/hexlify.o 00:03:47.522 CXX test/cpp_headers/histogram_data.o 00:03:47.522 CXX test/cpp_headers/idxd.o 00:03:47.522 LINK verify 00:03:47.522 CXX test/cpp_headers/idxd_spec.o 00:03:47.522 LINK bdev_svc 00:03:47.522 LINK ioat_perf 00:03:47.522 LINK spdk_tgt 00:03:47.522 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:47.522 CXX test/cpp_headers/init.o 00:03:47.522 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.784 CXX test/cpp_headers/ioat.o 00:03:47.784 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.784 CXX test/cpp_headers/ioat_spec.o 00:03:47.784 CXX test/cpp_headers/iscsi_spec.o 00:03:47.784 CXX test/cpp_headers/json.o 00:03:47.784 CXX test/cpp_headers/jsonrpc.o 00:03:47.784 CXX test/cpp_headers/keyring.o 00:03:47.784 LINK spdk_dd 00:03:47.784 CXX test/cpp_headers/keyring_module.o 00:03:47.784 LINK spdk_trace 00:03:47.784 CXX test/cpp_headers/likely.o 00:03:47.784 CXX test/cpp_headers/log.o 00:03:47.784 CXX test/cpp_headers/lvol.o 00:03:47.784 LINK pci_ut 00:03:47.784 CXX test/cpp_headers/memory.o 00:03:47.784 CXX test/cpp_headers/mmio.o 00:03:47.784 CXX test/cpp_headers/nbd.o 00:03:47.784 CXX test/cpp_headers/notify.o 00:03:47.784 CXX test/cpp_headers/nvme.o 00:03:47.784 CXX test/cpp_headers/nvme_intel.o 00:03:47.784 CXX test/cpp_headers/nvme_ocssd.o 00:03:47.784 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.784 CXX test/cpp_headers/nvme_spec.o 00:03:48.046 LINK test_dma 00:03:48.046 CXX test/cpp_headers/nvme_zns.o 00:03:48.046 CXX test/cpp_headers/nvmf_cmd.o 00:03:48.046 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:48.046 CXX test/cpp_headers/nvmf.o 00:03:48.046 CXX test/cpp_headers/nvmf_spec.o 00:03:48.046 CXX test/cpp_headers/nvmf_transport.o 00:03:48.046 CXX test/cpp_headers/opal.o 00:03:48.046 CXX test/cpp_headers/opal_spec.o 00:03:48.046 CXX test/cpp_headers/pci_ids.o 00:03:48.046 CXX test/cpp_headers/pipe.o 00:03:48.046 CXX test/cpp_headers/queue.o 00:03:48.046 CXX test/cpp_headers/reduce.o 00:03:48.046 CC test/event/event_perf/event_perf.o 00:03:48.046 CC examples/sock/hello_world/hello_sock.o 00:03:48.046 LINK spdk_bdev 00:03:48.046 CC examples/vmd/lsvmd/lsvmd.o 00:03:48.046 CXX test/cpp_headers/rpc.o 00:03:48.046 LINK nvme_fuzz 00:03:48.306 CXX test/cpp_headers/scheduler.o 00:03:48.306 CXX test/cpp_headers/scsi.o 00:03:48.306 CXX test/cpp_headers/scsi_spec.o 00:03:48.306 LINK spdk_nvme 00:03:48.306 CC examples/idxd/perf/perf.o 00:03:48.306 CC test/event/reactor/reactor.o 00:03:48.306 CC examples/vmd/led/led.o 00:03:48.306 CC 
test/event/reactor_perf/reactor_perf.o 00:03:48.306 CXX test/cpp_headers/sock.o 00:03:48.306 CC examples/thread/thread/thread_ex.o 00:03:48.306 CC test/event/app_repeat/app_repeat.o 00:03:48.306 CXX test/cpp_headers/stdinc.o 00:03:48.306 CXX test/cpp_headers/string.o 00:03:48.306 CXX test/cpp_headers/thread.o 00:03:48.306 CXX test/cpp_headers/trace.o 00:03:48.306 CXX test/cpp_headers/trace_parser.o 00:03:48.306 CC test/event/scheduler/scheduler.o 00:03:48.306 CXX test/cpp_headers/tree.o 00:03:48.306 CXX test/cpp_headers/ublk.o 00:03:48.306 CXX test/cpp_headers/util.o 00:03:48.306 CXX test/cpp_headers/uuid.o 00:03:48.306 CXX test/cpp_headers/version.o 00:03:48.306 CXX test/cpp_headers/vfio_user_pci.o 00:03:48.306 CXX test/cpp_headers/vfio_user_spec.o 00:03:48.306 CXX test/cpp_headers/vhost.o 00:03:48.306 CXX test/cpp_headers/vmd.o 00:03:48.306 CXX test/cpp_headers/xor.o 00:03:48.306 CXX test/cpp_headers/zipf.o 00:03:48.306 CC app/vhost/vhost.o 00:03:48.571 LINK event_perf 00:03:48.571 LINK spdk_nvme_perf 00:03:48.571 LINK lsvmd 00:03:48.571 LINK vhost_fuzz 00:03:48.571 LINK mem_callbacks 00:03:48.571 LINK reactor 00:03:48.571 LINK led 00:03:48.571 LINK reactor_perf 00:03:48.571 LINK spdk_nvme_identify 00:03:48.571 LINK app_repeat 00:03:48.571 LINK hello_sock 00:03:48.571 LINK spdk_top 00:03:48.571 CC test/nvme/aer/aer.o 00:03:48.571 CC test/nvme/overhead/overhead.o 00:03:48.571 CC test/nvme/startup/startup.o 00:03:48.571 CC test/nvme/err_injection/err_injection.o 00:03:48.571 CC test/nvme/sgl/sgl.o 00:03:48.571 CC test/nvme/e2edp/nvme_dp.o 00:03:48.571 CC test/nvme/reset/reset.o 00:03:48.571 CC test/nvme/reserve/reserve.o 00:03:48.860 CC test/nvme/simple_copy/simple_copy.o 00:03:48.860 CC test/nvme/boot_partition/boot_partition.o 00:03:48.860 CC test/accel/dif/dif.o 00:03:48.860 CC test/nvme/connect_stress/connect_stress.o 00:03:48.860 CC test/blobfs/mkfs/mkfs.o 00:03:48.860 CC test/nvme/compliance/nvme_compliance.o 00:03:48.860 LINK thread 00:03:48.860 CC test/nvme/fused_ordering/fused_ordering.o 00:03:48.860 LINK vhost 00:03:48.860 LINK scheduler 00:03:48.860 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:48.860 CC test/lvol/esnap/esnap.o 00:03:48.860 CC test/nvme/fdp/fdp.o 00:03:48.860 CC test/nvme/cuse/cuse.o 00:03:48.860 LINK idxd_perf 00:03:48.860 LINK startup 00:03:49.125 LINK reserve 00:03:49.125 LINK simple_copy 00:03:49.125 LINK boot_partition 00:03:49.125 LINK fused_ordering 00:03:49.125 LINK doorbell_aers 00:03:49.125 LINK err_injection 00:03:49.125 LINK sgl 00:03:49.125 LINK connect_stress 00:03:49.125 LINK reset 00:03:49.125 LINK aer 00:03:49.125 LINK overhead 00:03:49.125 CC examples/nvme/hello_world/hello_world.o 00:03:49.125 CC examples/nvme/reconnect/reconnect.o 00:03:49.125 CC examples/nvme/hotplug/hotplug.o 00:03:49.125 CC examples/nvme/abort/abort.o 00:03:49.125 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.125 LINK mkfs 00:03:49.125 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:49.125 CC examples/nvme/arbitration/arbitration.o 00:03:49.125 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:49.125 LINK nvme_compliance 00:03:49.125 LINK nvme_dp 00:03:49.125 LINK memory_ut 00:03:49.383 CC examples/accel/perf/accel_perf.o 00:03:49.383 LINK dif 00:03:49.383 CC examples/blob/cli/blobcli.o 00:03:49.383 CC examples/blob/hello_world/hello_blob.o 00:03:49.383 LINK hello_world 00:03:49.383 LINK fdp 00:03:49.383 LINK cmb_copy 00:03:49.383 LINK pmr_persistence 00:03:49.383 LINK hotplug 00:03:49.383 LINK reconnect 00:03:49.640 LINK arbitration 00:03:49.640 LINK hello_blob 
00:03:49.640 LINK abort
00:03:49.640 LINK nvme_manage
00:03:49.640 CC test/bdev/bdevio/bdevio.o
00:03:49.640 LINK accel_perf
00:03:49.898 LINK blobcli
00:03:50.155 LINK iscsi_fuzz
00:03:50.155 CC examples/bdev/hello_world/hello_bdev.o
00:03:50.155 CC examples/bdev/bdevperf/bdevperf.o
00:03:50.155 LINK bdevio
00:03:50.413 LINK hello_bdev
00:03:50.413 LINK cuse
00:03:50.979 LINK bdevperf
00:03:51.237 CC examples/nvmf/nvmf/nvmf.o
00:03:51.494 LINK nvmf
00:03:54.025 LINK esnap
00:03:54.025 
00:03:54.025 real 0m48.999s
00:03:54.025 user 10m5.586s
00:03:54.025 sys 2m28.210s
00:03:54.025 15:45:20 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:54.025 15:45:20 make -- common/autotest_common.sh@10 -- $ set +x
00:03:54.025 ************************************
00:03:54.025 END TEST make
00:03:54.025 ************************************
00:03:54.283 15:45:20 -- common/autotest_common.sh@1142 -- $ return 0
00:03:54.283 15:45:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:54.283 15:45:20 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:54.283 15:45:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:54.283 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:54.283 15:45:20 -- pm/common@44 -- $ pid=944785
00:03:54.283 15:45:20 -- pm/common@50 -- $ kill -TERM 944785
00:03:54.283 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:54.283 15:45:20 -- pm/common@44 -- $ pid=944787
00:03:54.283 15:45:20 -- pm/common@50 -- $ kill -TERM 944787
00:03:54.283 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:54.283 15:45:20 -- pm/common@44 -- $ pid=944789
00:03:54.283 15:45:20 -- pm/common@50 -- $ kill -TERM 944789
00:03:54.283 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:54.283 15:45:20 -- pm/common@44 -- $ pid=944817
00:03:54.283 15:45:20 -- pm/common@50 -- $ sudo -E kill -TERM 944817
00:03:54.283 15:45:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:54.283 15:45:21 -- nvmf/common.sh@7 -- # uname -s
00:03:54.283 15:45:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:54.283 15:45:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:54.283 15:45:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:54.283 15:45:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:54.283 15:45:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:54.283 15:45:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:54.283 15:45:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:54.283 15:45:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:54.283 15:45:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:54.283 15:45:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:54.283 15:45:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:03:54.283 15:45:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:03:54.283 15:45:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:54.283 15:45:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:54.283 15:45:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:54.283 15:45:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:54.283 15:45:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:54.283 15:45:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:54.283 15:45:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:54.283 15:45:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:54.283 15:45:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:54.283 15:45:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:54.283 15:45:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:54.283 15:45:21 -- paths/export.sh@5 -- # export PATH
00:03:54.283 15:45:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:54.283 15:45:21 -- nvmf/common.sh@47 -- # : 0
00:03:54.283 15:45:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:03:54.283 15:45:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:03:54.283 15:45:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:54.283 15:45:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:54.283 15:45:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:54.283 15:45:21 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:03:54.283 15:45:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:03:54.283 15:45:21 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:03:54.283 15:45:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:54.283 15:45:21 -- spdk/autotest.sh@32 -- # uname -s
00:03:54.283 15:45:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:54.283 15:45:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:54.283 15:45:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:54.283 15:45:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:54.283 15:45:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:54.283 15:45:21 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:54.283 15:45:21 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:54.283 15:45:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:54.283 15:45:21 -- spdk/autotest.sh@48 -- # udevadm_pid=1000778
00:03:54.283 15:45:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:54.283 15:45:21 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:54.283 15:45:21 -- pm/common@17 -- # local monitor
00:03:54.283 15:45:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:21 -- pm/common@21 -- # date +%s
00:03:54.283 15:45:21 -- pm/common@21 -- # date +%s
00:03:54.283 15:45:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:54.283 15:45:21 -- pm/common@25 -- # sleep 1
00:03:54.283 15:45:21 -- pm/common@21 -- # date +%s
00:03:54.283 15:45:21 -- pm/common@21 -- # date +%s
00:03:54.283 15:45:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051121
00:03:54.283 15:45:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051121
00:03:54.283 15:45:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051121
00:03:54.283 15:45:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051121
00:03:54.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051121_collect-vmstat.pm.log
00:03:54.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051121_collect-cpu-load.pm.log
00:03:54.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051121_collect-cpu-temp.pm.log
00:03:54.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051121_collect-bmc-pm.bmc.pm.log
00:03:55.216 15:45:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:55.216 15:45:22 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:55.216 15:45:22 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:55.216 15:45:22 -- common/autotest_common.sh@10 -- # set +x
00:03:55.216 15:45:22 -- spdk/autotest.sh@59 -- # create_test_list
00:03:55.216 15:45:22 -- common/autotest_common.sh@746 -- # xtrace_disable
00:03:55.216 15:45:22 -- common/autotest_common.sh@10 -- # set +x
00:03:55.216 15:45:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:55.216 15:45:22 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:55.216 15:45:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:55.216 15:45:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:55.216 15:45:22 -- spdk/autotest.sh@63 --
# cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.216 15:45:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:55.216 15:45:22 -- common/autotest_common.sh@1455 -- # uname 00:03:55.216 15:45:22 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:55.216 15:45:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:55.216 15:45:22 -- common/autotest_common.sh@1475 -- # uname 00:03:55.216 15:45:22 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:55.216 15:45:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:55.216 15:45:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:55.216 15:45:22 -- spdk/autotest.sh@72 -- # hash lcov 00:03:55.216 15:45:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:55.216 15:45:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:55.216 --rc lcov_branch_coverage=1 00:03:55.216 --rc lcov_function_coverage=1 00:03:55.216 --rc genhtml_branch_coverage=1 00:03:55.216 --rc genhtml_function_coverage=1 00:03:55.216 --rc genhtml_legend=1 00:03:55.216 --rc geninfo_all_blocks=1 00:03:55.216 ' 00:03:55.216 15:45:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:55.216 --rc lcov_branch_coverage=1 00:03:55.216 --rc lcov_function_coverage=1 00:03:55.216 --rc genhtml_branch_coverage=1 00:03:55.216 --rc genhtml_function_coverage=1 00:03:55.216 --rc genhtml_legend=1 00:03:55.216 --rc geninfo_all_blocks=1 00:03:55.216 ' 00:03:55.216 15:45:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:55.216 --rc lcov_branch_coverage=1 00:03:55.216 --rc lcov_function_coverage=1 00:03:55.216 --rc genhtml_branch_coverage=1 00:03:55.216 --rc genhtml_function_coverage=1 00:03:55.216 --rc genhtml_legend=1 00:03:55.216 --rc geninfo_all_blocks=1 00:03:55.216 --no-external' 00:03:55.216 15:45:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:55.216 --rc lcov_branch_coverage=1 00:03:55.216 --rc lcov_function_coverage=1 00:03:55.216 --rc genhtml_branch_coverage=1 00:03:55.216 --rc genhtml_function_coverage=1 00:03:55.216 --rc genhtml_legend=1 00:03:55.216 --rc geninfo_all_blocks=1 00:03:55.216 --no-external' 00:03:55.216 15:45:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:55.474 lcov: LCOV version 1.14 00:03:55.474 15:45:22 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:13.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:13.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:28.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:28.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:28.446 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:28.446 [the same two-line geninfo warning ('<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno') repeats for each remaining header object under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers, in this order: barrier, assert, bdev, bdev_module, bdev_zone, bit_array, bit_pool, base64, blob_bdev, blobfs_bdev, blobfs, blob, conf, config, cpuset, crc16, crc32, crc64, dma, dif, endian, env, env_dpdk, event, fd_group, fd, file, ftl, hexlify, gpt_spec, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, mmio, memory, lvol, nbd, nvme, notify, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_transport, opal, nvmf_spec, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi_spec, scsi, sock, string, stdinc, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec; identical repetitions condensed] 00:04:28.447 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:28.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:28.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:28.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:28.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:28.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:28.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:28.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:33.712 15:45:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:33.712 15:45:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.712 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:04:33.712 15:45:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:33.712 15:45:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.970 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:33.970 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:33.970 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:33.970 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:33.970 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:33.970 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:33.970 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:33.970 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:33.970 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:33.970 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:33.970 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:33.970 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:33.970 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:33.970 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:33.970 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:34.229 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:34.229 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:34.229 15:46:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:34.229 15:46:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:34.229 15:46:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:34.229 15:46:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:34.229 15:46:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.229 15:46:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:34.229 15:46:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:34.229 15:46:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.229 15:46:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.229 15:46:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:34.229 15:46:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:34.229 15:46:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
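The zoned-namespace scan traced above and the block_in_use probe that follows it reduce to two checks: read each namespace's queue/zoned attribute out of sysfs, and ask blkid whether the device carries a partition-table signature. A minimal standalone sketch of that logic, assuming bash and util-linux blkid (the /dev/nvme0n1 name is simply the namespace present on this host; this is a sketch, not the harness's own code):

  # Collect zoned NVMe namespaces, the way get_zoned_devs walks sysfs.
  for nvme in /sys/block/nvme*; do
      dev=${nvme##*/}
      # A conventional (non-zoned) namespace reports "none" in queue/zoned.
      [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && echo "$dev is zoned"
  done
  # block_in_use-style probe: an empty PTTYPE value from blkid means no
  # partition-table signature, so the device is treated as free to wipe.
  pt=$(blkid -s PTTYPE -o value /dev/nvme0n1 || true)
  [[ -z $pt ]] && echo '/dev/nvme0n1: no partition table detected'

On this host the lone namespace reported none for queue/zoned, so zoned_devs stayed empty, and the GPT check just below finds no signature and falls through to the dd wipe.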
00:04:34.229 15:46:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:34.229 15:46:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:34.229 15:46:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:34.229 No valid GPT data, bailing 00:04:34.229 15:46:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.229 15:46:01 -- scripts/common.sh@391 -- # pt= 00:04:34.229 15:46:01 -- scripts/common.sh@392 -- # return 1 00:04:34.229 15:46:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:34.229 1+0 records in 00:04:34.229 1+0 records out 00:04:34.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230015 s, 456 MB/s 00:04:34.229 15:46:01 -- spdk/autotest.sh@118 -- # sync 00:04:34.229 15:46:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:34.229 15:46:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:34.229 15:46:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:36.758 15:46:03 -- spdk/autotest.sh@124 -- # uname -s 00:04:36.758 15:46:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:36.758 15:46:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:36.758 15:46:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.758 15:46:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.758 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:04:36.758 ************************************ 00:04:36.758 START TEST setup.sh 00:04:36.758 ************************************ 00:04:36.758 15:46:03 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:36.758 * Looking for test storage... 00:04:36.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.758 15:46:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:36.758 15:46:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:36.758 15:46:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:36.758 15:46:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.758 15:46:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.758 15:46:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.758 ************************************ 00:04:36.758 START TEST acl 00:04:36.758 ************************************ 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:36.758 * Looking for test storage... 
00:04:36.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.758 15:46:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:36.758 15:46:03 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:36.758 15:46:03 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.758 15:46:03 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.693 15:46:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:37.693 15:46:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:37.693 15:46:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.693 15:46:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:37.693 15:46:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.693 15:46:04 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:39.069 Hugepages 00:04:39.069 node hugesize free / total 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.069 00:04:39.069 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.069 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [the same xtrace cycle (acl.sh@19 matches the BDF pattern, acl.sh@20 sees driver ioatdma rather than nvme and continues, acl.sh@18 reads the next status row) repeats for the remaining ioatdma channels 0000:00:04.2, 0000:00:04.3, 0000:00:04.4, 0000:00:04.5, 0000:00:04.6, 0000:00:04.7, 0000:80:04.0, 0000:80:04.1, 0000:80:04.2 and 0000:80:04.3; identical repetitions condensed] 00:04:39.070 15:46:05 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:39.070 15:46:05 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:39.070 15:46:05 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.070 15:46:05 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.070 15:46:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 ************************************ 00:04:39.070 START TEST denied 00:04:39.070 ************************************ 00:04:39.070 15:46:05 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:39.070 15:46:05 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:39.070 15:46:05 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:39.070 15:46:05 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:39.070 15:46:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.070 15:46:05 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.444 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:40.444 15:46:07 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.444 15:46:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.976 00:04:42.976 real 0m3.776s 00:04:42.976 user 0m1.136s 00:04:42.976 sys 0m1.759s 00:04:42.976 15:46:09 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.976 15:46:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 ************************************ 00:04:42.976 END TEST denied 00:04:42.976 ************************************ 00:04:42.976 15:46:09 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:42.976 15:46:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.976 15:46:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.976 15:46:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.976 15:46:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 ************************************ 00:04:42.976 START TEST allowed 00:04:42.976 ************************************ 00:04:42.976 15:46:09 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:42.976 15:46:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:42.976 15:46:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:42.976 15:46:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:42.976 15:46:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.976 15:46:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.879 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.879 15:46:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:44.879 15:46:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:44.879 15:46:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:44.879 15:46:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.879 15:46:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.776 00:04:46.776 real 0m3.770s 00:04:46.776 user 0m1.011s 00:04:46.776 sys 0m1.602s 00:04:46.776 15:46:13 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.776 15:46:13 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:46.776 ************************************ 00:04:46.776 END TEST allowed 00:04:46.776 ************************************ 00:04:46.776 15:46:13 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:46.777 00:04:46.777 real 0m10.178s 00:04:46.777 user 0m3.212s 00:04:46.777 sys 0m4.998s 00:04:46.777 15:46:13 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.777 15:46:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:46.777 ************************************ 00:04:46.777 END TEST acl 00:04:46.777 ************************************ 00:04:46.777 15:46:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:46.777 15:46:13 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:46.777 15:46:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.777 15:46:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.777 15:46:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.777 ************************************ 00:04:46.777 START TEST hugepages 00:04:46.777 ************************************ 00:04:46.777 15:46:13 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:46.777 * Looking for test storage... 00:04:46.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43685468 kB' 'MemAvailable: 47188720 kB' 'Buffers: 2704 kB' 'Cached: 10299272 kB' 'SwapCached: 0 kB' 'Active: 7299612 kB' 'Inactive: 3506596 kB' 'Active(anon): 6905020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507576 kB' 'Mapped: 215220 kB' 'Shmem: 6400788 kB' 'KReclaimable: 191808 kB' 'Slab: 559940 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 368132 kB' 'KernelStack: 12896 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 8020236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.777 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' [the same xtrace cycle (setup/common.sh@31 resets IFS and reads the next 'var val' pair, setup/common.sh@32 tests '[[ <var> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' and continues on a non-match) repeats for each remaining field of the meminfo record captured above: Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, ... on toward the Hugepagesize field the loop is looking for; identical repetitions condensed]
setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.778 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.779 
15:46:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.779 15:46:13 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:46.779 15:46:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.779 15:46:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.779 15:46:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.779 ************************************ 00:04:46.779 START TEST default_setup 00:04:46.779 ************************************ 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.779 15:46:13 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.162 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.162 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.162 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.162 
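The get_meminfo trace above is a plain field lookup over /proc/meminfo, and get_test_nr_hugepages then divides the requested size by the page size (2097152 kB / 2048 kB = 1024 pages) before clear_hp resets the per-node pools. A minimal bash sketch of those three steps follows, before setup.sh's output below; it is a simplification of the traced setup/common.sh and setup/hugepages.sh helpers (per-node meminfo support omitted), not the SPDK code itself:

    #!/usr/bin/env bash
    # Minimal sketch (not the SPDK helpers themselves): look up a field in
    # /proc/meminfo the way the traced loop does, size the test pool, and
    # zero every per-node hugepage pool as clear_hp does. The sysfs writes
    # need root.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long "continue" run above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this host
    size=2097152                                    # kB requested by the test
    nr_hugepages=$(( size / default_hugepages ))    # -> 1024, as in the trace

    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"                          # clear_hp: drop old pools
        done
    done
    echo "will allocate $nr_hugepages pages of ${default_hugepages} kB"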
00:04:48.162 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:48.162 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:48.162 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:49.142 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
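The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are printed while setup.sh detaches the in-kernel drivers and hands the devices to vfio-pci so the test can map them from user space. A sketch of the generic sysfs rebind such a message corresponds to; the BDF is the NVMe device from the log, but the exact commands setup.sh runs may differ:

    #!/usr/bin/env bash
    # Sketch of a standard sysfs driver rebind (generic kernel interface,
    # not a copy of spdk/scripts/setup.sh). Needs root.
    bdf=0000:88:00.0                      # NVMe device from the log above
    dev=/sys/bus/pci/devices/$bdf

    modprobe vfio-pci                     # make sure the target driver exists

    # Detach whatever driver is currently bound (nvme here, ioatdma for
    # the I/OAT DMA channels).
    if [[ -e $dev/driver ]]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi

    # Pin the device to vfio-pci and ask the PCI core to probe it again.
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe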
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.142 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.143 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:49.143 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:49.143 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782408 kB' 'MemAvailable: 49285652 kB' 'Buffers: 2704 kB' 'Cached: 10299372 kB' 'SwapCached: 0 kB' 'Active: 7317888 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923296 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525252 kB' 'Mapped: 215352 kB' 'Shmem: 6400888 kB' 'KReclaimable: 191792 kB' 'Slab: 559528 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367736 kB' 'KernelStack: 12784 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:49.143 [... xtrace trimmed: setup/common.sh@31-32 loop reads and skips every /proc/meminfo field until AnonHugePages matches ...]
00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
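The @96 test just above compares the transparent-hugepage mode string (here "always [madvise] never") against *[never]*, so AnonHugePages is only sampled when THP can actually occur; on this host it came back 0. A minimal sketch of that gate, assuming the standard sysfs location of the THP knob:

    #!/usr/bin/env bash
    # Sketch of the gate traced at setup/hugepages.sh@96-97: read the
    # bracketed THP mode and only sample AnonHugePages when THP is enabled.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "THP mode: $thp, AnonHugePages: ${anon} kB"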
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782408 kB' 'MemAvailable: 49285652 kB' 'Buffers: 2704 kB' 'Cached: 10299372 kB' 'SwapCached: 0 kB' 'Active: 7317720 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525444 kB' 'Mapped: 215316 kB' 'Shmem: 6400888 kB' 'KReclaimable: 191792 kB' 'Slab: 559528 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367736 kB' 'KernelStack: 12784 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.144 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.145 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace triplets elided: NFS_Unstable through HugePages_Rsvd are each tested against HugePages_Surp and skipped ...]
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
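The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo: IFS=': ' splits each line into a key and a value, and the backslash-heavy comparisons are just xtrace's escaped rendering of a literal string match against the requested field. A minimal standalone sketch of that lookup pattern, assuming plain bash and a readable /proc/meminfo (the function name is illustrative, not the SPDK helper itself):

    # Print the value of the first /proc/meminfo line whose key matches $1.
    # Lines look like "HugePages_Surp:        0"; IFS=': ' consumes the
    # colon and padding, leaving the key in var and the number in val.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 on this machine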
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782408 kB' 'MemAvailable: 49285652 kB' 'Buffers: 2704 kB' 'Cached: 10299388 kB' 'SwapCached: 0 kB' 'Active: 7317280 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922688 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524976 kB' 'Mapped: 215240 kB' 'Shmem: 6400904 kB' 'KReclaimable: 191792 kB' 'Slab: 559536 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367744 kB' 'KernelStack: 12768 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.146 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace triplets elided: MemFree through HugePages_Free are each tested against HugePages_Rsvd and skipped ...]
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:49.148 nr_hugepages=1024
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.148 resv_hugepages=0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.148 surplus_hugepages=0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.148 anon_hugepages=0
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
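With surp and resv both 0, hugepages.sh echoes the derived counters and then asserts its bookkeeping: the two arithmetic tests above check that the 1024 pages read back equal nr_hugepages plus surplus plus reserved, and that the plain nr_hugepages target was met. A hedged sketch that mirrors those same checks (variable names follow the trace; the grep-based reads are illustrative, not the script's own code):

    # Pull the three hugepage counters straight out of /proc/meminfo.
    read -r _ surp  < <(grep '^HugePages_Surp:'  /proc/meminfo)
    read -r _ resv  < <(grep '^HugePages_Rsvd:'  /proc/meminfo)
    read -r _ total < <(grep '^HugePages_Total:' /proc/meminfo)

    nr_hugepages=1024    # the page count this test configured earlier
    # Mirror the script's consistency checks as traced above.
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "hugepage accounting consistent: total=$total"
    else
        echo "mismatch: total=$total surp=$surp resv=$resv" >&2
    fi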
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782956 kB' 'MemAvailable: 49286200 kB' 'Buffers: 2704 kB' 'Cached: 10299416 kB' 'SwapCached: 0 kB' 'Active: 7317576 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922984 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525252 kB' 'Mapped: 215240 kB' 'Shmem: 6400932 kB' 'KReclaimable: 191792 kB' 'Slab: 559528 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367736 kB' 'KernelStack: 12800 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.148 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace triplets elided: MemFree through Unaccepted are each tested against HugePages_Total and skipped ...]
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
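get_nodes, traced above, discovers the NUMA layout with an extglob pattern: node+([0-9]) expands to node0 and node1 under /sys/devices/system/node, the numeric suffix becomes the array index, and on this machine all 1024 pages sit on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0). The per-node pass that follows re-runs get_meminfo against the node's own meminfo file, whose lines carry a "Node <id> " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A standalone sketch of both steps, assuming a sysfs layout like this machine's (the hugepages-2048kB path matches the 2048 kB Hugepagesize reported above):

    shopt -s extglob
    declare -a nodes_sys

    # Step 1: enumerate NUMA nodes; ${node##*node} keeps only the numeric id.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # prints no_nodes=2 on this box

    # Step 2: read one node's meminfo; each line starts with "Node <id> ",
    # so strip that prefix, then split on ": " as for the global file.
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "node0 HugePages_Surp=$val"
    done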
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21084744 kB' 'MemUsed: 11792196 kB' 'SwapCached: 0 kB' 'Active: 5485016 kB' 'Inactive: 3263864 kB' 'Active(anon): 5296444 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467052 kB' 'Mapped: 85248 kB' 'AnonPages: 285024 kB' 'Shmem: 5014616 kB' 'KernelStack: 7144 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313472 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 192032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.150 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace triplets elided: MemFree through KReclaimable are each tested against HugePages_Surp and skipped ...]
00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:49.151 15:46:15 setup.sh.hugepages.default_setup
-- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
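[editor's note] The scan collapsed above is setup/common.sh's get_meminfo at work: it walks the meminfo lines with IFS=': ', skipping every key that is not the one requested. A minimal standalone sketch of the same pattern (function name is ours; the real helper also snapshots the file with mapfile and supports per-node lookups):

    # Sketch: extract one field from /proc/meminfo, mirroring the
    # IFS=': ' / read / continue pattern visible in the xtrace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" steps
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp   -> 0 on this box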
00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.151 node0=1024 expecting 1024 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.151 00:04:49.151 real 0m2.497s 00:04:49.151 user 0m0.692s 00:04:49.151 sys 0m0.911s 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.151 15:46:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:49.151 ************************************ 00:04:49.151 END TEST default_setup 00:04:49.151 ************************************ 00:04:49.151 15:46:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.151 15:46:16 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:49.151 15:46:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.151 15:46:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.151 15:46:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.151 ************************************ 00:04:49.151 START TEST per_node_1G_alloc 00:04:49.151 ************************************ 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:49.151 15:46:16
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.151 15:46:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.532 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.532 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.532 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.532 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.532 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.532 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.532 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.532 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.532 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.532 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.532 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.532 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.532 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.532 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.532 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.532 
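[editor's note] The parameters traced above pin down the sizing arithmetic: get_test_nr_hugepages is called with size=1048576 kB (1G) spread over nodes 0 and 1, and with the 2048 kB default hugepage size that works out to 512 pages per node (nodes_test[0]=nodes_test[1]=512, NRHUGE=512, HUGENODE=0,1). A quick recomputation, not the script's own code:

    size_kb=1048576            # from: get_test_nr_hugepages 1048576 0 1
    hugepage_kb=2048           # Hugepagesize in the meminfo snapshots below
    echo $(( size_kb / hugepage_kb ))   # -> 512 pages per node

The surrounding 0000:* lines are scripts/setup.sh reporting that each PCI device is already bound to the vfio-pci driver before the hugepage pool is reconfigured.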
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.532 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.532 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:50.532 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:50.532 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.532 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.532 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.533 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45787000 kB' 'MemAvailable: 49290244 kB' 'Buffers: 2704 kB' 'Cached: 10299484 kB' 'SwapCached: 0 kB' 'Active: 7317560 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922968 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525228 kB' 'Mapped: 215356 kB' 'Shmem: 6401000 kB' 'KReclaimable: 191792 kB' 'Slab: 559420 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367628 kB' 'KernelStack: 12784 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: setup/common.sh@31-32 -- get_meminfo steps "continue" past every /proc/meminfo key that is not AnonHugePages]
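[editor's note] The field extracted in the scan collapsed above is AnonHugePages. verify_nr_hugepages reads it because, when transparent hugepages are not set to [never] (the earlier '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' step), anonymous THP usage has to be tracked separately from the static pool; here it comes back 0 kB, hence anon=0 just below. A hedged sketch of that guard, reusing get_meminfo_sketch from the note above:

    # Sketch: only bother reading AnonHugePages when THP is not disabled.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)            # -> 0 on this box
    fi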
00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- #
get_meminfo HugePages_Surp 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45786796 kB' 'MemAvailable: 49290040 kB' 'Buffers: 2704 kB' 'Cached: 10299484 kB' 'SwapCached: 0 kB' 'Active: 7317680 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923088 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525400 kB' 'Mapped: 215380 kB' 'Shmem: 6401000 kB' 'KReclaimable: 191792 kB' 'Slab: 559424 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367632 kB' 'KernelStack: 12832 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.534 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace elided: setup/common.sh@31-32 -- get_meminfo steps "continue" past every /proc/meminfo key that is not HugePages_Surp]
00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
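[editor's note] At this point verify_nr_hugepages has anon=0 and surp=0, and the next get_meminfo call fetches HugePages_Rsvd. With all three at zero, the bookkeeping reduces to "free must equal total". A sketch of the invariant being established (variable names are ours, reusing get_meminfo_sketch from the first note):

    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the snapshots
    free=$(get_meminfo_sketch HugePages_Free)     # 1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    # An idle pool with no surplus or reserved pages must be fully free.
    (( surp == 0 && rsvd == 0 && free == total )) && echo "pool idle: $total pages"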
setup/hugepages.sh@99 -- # surp=0 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45786472 kB' 'MemAvailable: 49289716 kB' 'Buffers: 2704 kB' 'Cached: 10299504 kB' 'SwapCached: 0 kB' 'Active: 7317928 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923336 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525584 kB' 'Mapped: 215332 kB' 'Shmem: 6401020 kB' 'KReclaimable: 191792 kB' 'Slab: 559424 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367632 kB' 'KernelStack: 12816 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.536 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.536 
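The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time under xtrace. A minimal sketch of that helper, reconstructed from the traced statements alone (the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
    # or from the given NUMA node's meminfo file when NODE is supplied.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node's own meminfo file instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields until KEY matches
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 for the snapshot above; get_meminfo HugePages_Surp 0 would read node0's file, as the trace does further down.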
[... xtrace elided: the read loop continues past every /proc/meminfo field until HugePages_Rsvd matches ...]
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:50.538 nr_hugepages=1024
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:50.538 resv_hugepages=0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:50.538 surplus_hugepages=0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:50.538 anon_hugepages=0
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.538 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45787376 kB' 'MemAvailable: 49290620 kB' 'Buffers: 2704 kB' 'Cached: 10299504 kB' 'SwapCached: 0 kB' 'Active: 7317020 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524640 kB' 'Mapped: 215252 kB' 'Shmem: 6401020 kB' 'KReclaimable: 191792 kB' 'Slab: 559440 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367648 kB' 'KernelStack: 12800 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
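What hugepages.sh does at @99-@110 is an accounting check: after reading back the surplus and reserved counts, the kernel-wide HugePages_Total has to equal the configured page count plus those two. A hypothetical condensation of that traced logic; nr_hugepages is assumed to have been set earlier by the test (1024 in this run), and the source of the anon_hugepages value is a guess:

    # Condensed from the xtrace of hugepages.sh@99-@110; names follow the trace,
    # but nr_hugepages is set elsewhere in the test (1024 here) -- an assumption.
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"   # assumed source; prints 0 here
    # Kernel-wide total must account for configured + surplus + reserved pages
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

In this run all three readbacks are 0 or match, so the check at @110 passes with 1024 == 1024 + 0 + 0.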
[... xtrace elided: the read loop continues past every /proc/meminfo field until HugePages_Total matches ...]
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.540 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22128048 kB' 'MemUsed: 10748892 kB' 'SwapCached: 0 kB' 'Active: 5484224 kB' 'Inactive: 3263864 kB' 'Active(anon): 5295652 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467160 kB' 'Mapped: 85248 kB' 'AnonPages: 284044 kB' 'Shmem: 5014724 kB' 'KernelStack: 7080 kB' 'PageTables: 4824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313460 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 192020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
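get_nodes then enumerates the NUMA nodes under /sys/devices/system/node and records the per-node expectation (512 pages per node here, i.e. the 1024 pages split across no_nodes=2), before the per-node loop re-queries each node's own meminfo. A sketch of that flow; the literal 512 and the nodes_test array reflect values xtrace prints after expansion, so the real script presumably computes them:

    # Sketch of the traced get_nodes (hugepages.sh@27-@33) and the per-node
    # verification loop (@115-@117). nodes_test is populated elsewhere in
    # hugepages.sh -- an assumption here; get_meminfo is the helper above.
    shopt -s extglob nullglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512   # expected pages on this node (expanded value)
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # at least one NUMA node must exist
    }

    get_nodes
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # fold reserved pages into expectation
        surp=$(get_meminfo HugePages_Surp "$node")   # query this node's own meminfo
    done

The node0 snapshot above shows the split directly: HugePages_Total: 512 and HugePages_Free: 512 on node0, half of the system-wide 1024.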
[... xtrace elided: the read loop continues past each /sys/devices/system/node/node0/meminfo field ...]
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23659328 kB' 'MemUsed: 4005424 kB' 'SwapCached: 0 kB' 'Active: 1833084 kB' 'Inactive: 242732 kB' 'Active(anon): 1627064 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1835096 kB' 'Mapped: 130004 kB' 'AnonPages: 240820 kB' 'Shmem: 1386344 kB' 'KernelStack: 5704 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70352 kB' 'Slab: 245980 kB' 'SReclaimable: 70352 kB' 'SUnreclaim: 175628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
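The dump above is the whole of /sys/devices/system/node/node1/meminfo read in one shot; the trace that follows walks it field by field until HugePages_Surp matches. A standalone sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source, so treat names and details as approximations:

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo mem

        # Per-node counters live in sysfs; fall back to the global /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # sysfs lines carry a "Node 1 " prefix ("Node 1 MemTotal: ..."); strip it
        # so both file formats parse identically (extglob enables +([0-9])).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            # "HugePages_Surp: 0" splits into var=HugePages_Surp, val=0, _=''.
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1    # on the dump above, prints 0

Because the function echoes only the numeric value, the caller can use it directly in arithmetic; the literal 0 in the (( nodes_test[node] += 0 )) entries in this trace is exactly that echoed value, already substituted by the time xtrace prints the line.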
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.542 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free]
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:50.543 node0=512 expecting 512
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:50.543 node1=512 expecting 512
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:50.543 
00:04:50.543 real 0m1.382s
00:04:50.543 user 0m0.607s
00:04:50.543 sys 0m0.734s
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:50.543 15:46:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:50.543 ************************************
00:04:50.543 END TEST per_node_1G_alloc
00:04:50.543 ************************************
00:04:50.543 15:46:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:50.543 15:46:17 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
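per_node_1G_alloc passes because both NUMA nodes report exactly the page count they were asked for. The sorted_t and sorted_s assignments above use a compact bash idiom: the per-node count is used as an array index, so equal counts from every node collapse into a single element. A minimal sketch of that idiom, with the array contents assumed from this trace rather than taken from the SPDK source:

    declare -a nodes_test=([0]=512 [1]=512)   # per-node test pages, as traced
    declare -a sorted_t=()

    for node in "${!nodes_test[@]}"; do
        # Index by value: node0 and node1 both set sorted_t[512]=1.
        sorted_t[nodes_test[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

    # One distinct index means every node ended up with the same count.
    (( ${#sorted_t[@]} == 1 )) && echo "pages evenly spread: ${!sorted_t[*]} per node"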
00:04:50.543 15:46:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:50.543 15:46:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:50.543 15:46:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:50.803 ************************************
00:04:50.803 START TEST even_2G_alloc
00:04:50.803 ************************************
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.803 15:46:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:51.736 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.736 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:51.736 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.736 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.736 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.736 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.736 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.736 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.736 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:51.736 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.736 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.736 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.736 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.736 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.736 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.736 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.736 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
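The get_test_nr_hugepages trace above fixes the size of this test: 2097152 kB (2 GiB) at the 2048 kB default hugepage size gives nr_hugepages=1024, and the _no_nodes countdown hands each of the two NUMA nodes an even 512-page share. The arithmetic as a sketch; the variable names mirror the xtrace, and the kB units are inferred from the 'Hugepagesize: 2048 kB' line in the meminfo dumps, so treat both as assumptions:

    size=2097152              # requested allocation in kB, i.e. 2 GiB
    default_hugepages=2048    # kB, matching 'Hugepagesize: 2048 kB'
    nr_hugepages=$(( size / default_hugepages ))    # 1024 pages of 2 MiB

    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        # Each pass assigns the even two-node share (1024 / 2 = 512) to the
        # highest remaining node index, exactly as the @81/@82 trace shows.
        nodes_test[_no_nodes - 1]=$(( nr_hugepages / 2 ))
        (( _no_nodes-- ))
    done

    echo "nr_hugepages=$nr_hugepages nodes: ${nodes_test[*]}"   # 1024 nodes: 512 512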
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45778240 kB' 'MemAvailable: 49281484 kB' 'Buffers: 2704 kB' 'Cached: 10299616 kB' 'SwapCached: 0 kB' 'Active: 7317772 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525308 kB' 'Mapped: 215384 kB' 'Shmem: 6401132 kB' 'KReclaimable: 191792 kB' 'Slab: 559556 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367764 kB' 'KernelStack: 12784 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8038032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.003 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted]
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
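verify_nr_hugepages only performs this AnonHugePages lookup because the earlier setup/hugepages.sh@96 test found transparent hugepages enabled: the traced policy string 'always [madvise] never' does not contain '[never]', and THP-backed anonymous memory could otherwise distort the hugepage accounting. A sketch of that guard; the sysfs path is the standard kernel location for the THP policy, not something shown in this log, so treat it as an assumption:

    # The kernel reports the active THP policy in brackets,
    # e.g. "always [madvise] never" as seen in the @96 trace above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # Same lookup the trace just performed: AnonHugePages is 0 kB in the
        # dump above, so THP is not inflating the hugepage numbers here.
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"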
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45778368 kB' 'MemAvailable: 49281612 kB' 'Buffers: 2704 kB' 'Cached: 10299620 kB' 'SwapCached: 0 kB' 'Active: 7317584 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922992 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525100 kB' 'Mapped: 215344 kB' 'Shmem: 6401136 kB' 'KReclaimable: 191792 kB' 'Slab: 559536 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367744 kB' 'KernelStack: 12784 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8038052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.004 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped and Dirty]
00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.005 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
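The trace above is get_meminfo from setup/common.sh resolving HugePages_Surp: it snapshots /proc/meminfo into an array, then walks the fields with an IFS=': ' read loop until the requested key matches and its value is echoed. A minimal sketch of that loop, reconstructed from this trace alone (the real setup/common.sh may differ in details such as error handling and how the node argument is passed):

    #!/usr/bin/env bash
    shopt -s extglob  # the "+([0-9])" pattern below needs extended globs

    # Reconstruction of setup/common.sh:get_meminfo as it appears in this
    # xtrace; a sketch, not the canonical SPDK implementation.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # When a NUMA node is given and its meminfo exists, read that instead
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip until the key matches
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp  # prints 0 on the host traced above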
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.006 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45779064 kB' 'MemAvailable: 49282308 kB' 'Buffers: 2704 kB' 'Cached: 10299636 kB' 'SwapCached: 0 kB' 'Active: 7317504 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922912 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524988 kB' 'Mapped: 215264 kB' 'Shmem: 6401152 kB' 'KReclaimable: 191792 kB' 'Slab: 559500 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367708 kB' 'KernelStack: 12800 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8038072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
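A quick consistency check on the snapshot just printed (my arithmetic, not part of the log): HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the pool should account for 1024 * 2048 kB = 2097152 kB, which is exactly the Hugetlb figure, meaning the full 2 GiB requested by the even_2G_alloc test is populated and entirely free (HugePages_Free: 1024). The same check can be run on any host, e.g.:

    # Hypothetical helper (not from the log): HugePages_Total * Hugepagesize == Hugetlb
    awk '/^HugePages_Total/ {n=$2}
         /^Hugepagesize/    {sz=$2}
         /^Hugetlb/         {tl=$2}
         END {print (n * sz == tl) ? "hugetlb accounting consistent" : "mismatch"}' /proc/meminfo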
[... setup/common.sh@31-32: the same field-by-field scan; every field from MemTotal through HugePages_Free hits "continue" until HugePages_Rsvd matches ...]
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:52.008 nr_hugepages=1024
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.008 resv_hugepages=0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.008 surplus_hugepages=0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.008 anon_hugepages=0
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
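Lines 102-109 of setup/hugepages.sh are the verification step: the four figures gathered above via get_meminfo are reported, and the even 2G allocation is accepted only if the pool arithmetic holds. With the values from this run that reduces to the following (the surrounding if/else is my sketch; only the two (( ... )) tests appear in the trace):

    # Values resolved above on the traced host
    nr_hugepages=1024  # HugePages_Total: 1024 pages of 2048 kB = 2 GiB
    anon=0             # AnonHugePages
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd

    # hugepages.sh@107/@109: no surplus or reserved pages may inflate the pool
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "even 2G allocation verified"
    else
        echo "hugepage accounting off" >&2
        exit 1
    fi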
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.008 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45778812 kB' 'MemAvailable: 49282056 kB' 'Buffers: 2704 kB' 'Cached: 10299656 kB' 'SwapCached: 0 kB' 'Active: 7318164 kB' 'Inactive: 3506596 kB' 'Active(anon): 6923572 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525688 kB' 'Mapped: 215264 kB' 'Shmem: 6401172 kB' 'KReclaimable: 191792 kB' 'Slab: 559500 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367708 kB' 'KernelStack: 12864 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8038096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: the field-by-field scan for HugePages_Total begins; each non-matching field hits "continue" as the trace continues below ...]
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.010 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22121996 kB' 'MemUsed: 10754944 kB' 'SwapCached: 0 kB' 'Active: 5483680 kB' 'Inactive: 3263864 kB' 'Active(anon): 5295108 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467240 kB' 'Mapped: 85248 kB' 'AnonPages: 283408 kB' 'Shmem: 5014804 kB' 'KernelStack: 7064 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313376 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 191936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace condensed: the @31 read / @32 compare-and-continue cycle walks every node0 field above, HugePages_Total and HugePages_Free included, until HugePages_Surp matches ...]
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.011 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23656816 kB' 'MemUsed: 4007936 kB' 'SwapCached: 0 kB' 'Active: 1833636 kB' 'Inactive: 242732 kB' 'Active(anon): 1627616 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1835160 kB' 'Mapped: 130016 kB' 'AnonPages: 241332 kB' 'Shmem: 1386408 kB' 'KernelStack: 5720 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70352 kB' 'Slab: 246124 kB' 'SReclaimable: 70352 kB' 'SUnreclaim: 175772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace condensed: the same per-field scan repeats over the node1 values above, HugePages_Total and HugePages_Free included, until HugePages_Surp matches ...]
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:52.013 node0=512 expecting 512
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:52.013 node1=512 expecting 512
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:52.013
00:04:52.013 real 0m1.368s
00:04:52.013 user 0m0.568s
00:04:52.013 sys 0m0.756s
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.013 15:46:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:52.013 ************************************
00:04:52.013 END TEST even_2G_alloc
00:04:52.013 ************************************
00:04:52.013 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:52.013 15:46:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:52.013 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:52.013 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.013 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
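[Editor's note: the condensed xtrace above is setup/common.sh's get_meminfo doing a linear scan of a meminfo file: when a node argument is given it reads /sys/devices/system/node/nodeN/meminfo (falling back to /proc/meminfo if the per-node file does not exist, which is also what happens later in this log when node= is empty), strips the leading "Node N " column that the per-node format adds, then splits each line on ': ' until the requested field name matches and its value is echoed. A minimal standalone Bash sketch of that pattern follows; it mirrors what the trace shows but is not the verbatim SPDK source.]

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  # get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo,
  # or from the per-node copy when NODE names an existing NUMA node.
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # Per-node files prefix every line with "Node N ", e.g. "Node 0 MemTotal: ...".
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # "MemTotal: 123 kB" -> var=MemTotal val=123
          [[ $var == "$get" ]] || continue          # the continue storm seen in the trace
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total      # whole machine; printed 1024 in the run above
  get_meminfo HugePages_Surp 0     # node-scoped lookup via node0/meminfo

[The linear scan is O(fields) per lookup, which is why the trace repeats it in full for every get_meminfo call.]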
00:04:52.013 ************************************
00:04:52.013 START TEST odd_alloc
00:04:52.013 ************************************
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:52.013 15:46:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:53.391 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:53.391 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:53.391 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:53.391 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:53.391 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:53.391 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:53.391 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:53.391 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:53.391 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:53.391 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:53.391 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:53.391 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:53.391 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:53.391 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:53.391 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:53.391 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:53.391 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45798824 kB' 'MemAvailable: 49302068 kB' 'Buffers: 2704 kB' 'Cached: 10299752 kB' 'SwapCached: 0 kB' 'Active: 7314396 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521688 kB' 'Mapped: 214368 kB' 'Shmem: 6401268 kB' 'KReclaimable: 191792 kB' 'Slab: 559256 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367464 kB' 'KernelStack: 12736 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8022828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[... repetitive xtrace condensed: the per-field scan walks the system meminfo above against AnonHugePages, hitting @32 continue on each non-match; it resolves just below ...]
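[Editor's note: while that scan runs, it is worth making the odd_alloc per-node split from the @81-@84 trace above explicit. 1025 pages cannot split evenly across 2 nodes, so the loop walks nodes from the highest index down, giving each node the integer share of what remains: node1 gets 1025/2 = 512 (the trailing ": 513" is the leftover count), and node0 absorbs the remainder, 513. A Bash sketch of that distribution under those assumptions; the helper name is ours, not hugepages.sh's.]

  #!/usr/bin/env bash
  # Spread NR hugepages over NODES NUMA nodes, highest node first, so any
  # odd remainder accumulates on node 0 -- matching the nodes_test[1]=512 /
  # nodes_test[0]=513 assignments seen in the trace.
  split_hugepages_per_node() {
      local nr=$1 nodes=$2
      local -a per_node
      while (( nodes > 0 )); do
          per_node[nodes - 1]=$(( nr / nodes ))   # integer share for this node
          nr=$(( nr - per_node[nodes - 1] ))      # remainder carries to the rest
          (( nodes-- ))
      done
      echo "${per_node[@]}"
  }

  split_hugepages_per_node 1025 2   # -> 513 512 (the odd_alloc case)
  split_hugepages_per_node 1024 2   # -> 512 512 (the even_2G_alloc case above)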
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.392 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45801564 kB' 'MemAvailable: 49304808 kB' 'Buffers: 2704 kB' 'Cached: 10299756 kB' 'SwapCached: 0 kB' 'Active: 7314664 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522000 kB' 'Mapped: 214328 kB' 'Shmem: 6401272 kB' 'KReclaimable: 191792 kB' 'Slab: 559268 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367476 kB' 'KernelStack: 12736 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8022848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20
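The trace above is setup/common.sh's get_meminfo walking a full meminfo snapshot one 'Key: value' line at a time: mapfile loads the file into an array, an extglob substitution strips any leading "Node N " prefix, and a read loop with IFS=': ' splits each entry until the requested key matches, at which point the value is echoed and the function returns. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (the name get_meminfo_sketch and the process-substitution framing are assumptions):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # With a node number, read the per-node file instead; with node= empty
    # (as in the trace) the test degenerates to
    # /sys/devices/system/node/node/meminfo, fails -e, and mem_f stays
    # pointed at /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node N " prefixes from per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. 1025 for HugePages_Total in the snapshot above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}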
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
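Every [[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] entry in this scan is one iteration of the same comparison; the backslashes are bash xtrace re-quoting a quoted right-hand side so it prints as a literal match rather than a glob, not escaping written into the script. An illustration with made-up values, not taken from the test:

set -x
get=HugePages_Surp var=SwapCached
[[ $var == "$get" ]]
# the xtrace line reads: [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]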
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
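The loop re-reads and rescans the whole snapshot for every key, which is why the HugePages_Surp, HugePages_Rsvd and HugePages_Total lookups in this section each trigger a full pass over /proc/meminfo. Where only a single value is needed, one awk invocation performs the same split on the colon and whitespace (an equivalent one-liner for comparison, not what the script uses):

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 for the snapshot above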
00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.393 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': '
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802600 kB' 'MemAvailable: 49305844 kB' 'Buffers: 2704 kB' 'Cached: 10299776 kB' 'SwapCached: 0 kB' 'Active: 7314580 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919988 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521888 kB' 'Mapped: 214328 kB' 'Shmem: 6401292 kB' 'KReclaimable: 191792 kB' 'Slab: 559344 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367552 kB' 'KernelStack: 12768 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8022868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.394 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:53.395 nr_hugepages=1025
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.395 resv_hugepages=0
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.395 surplus_hugepages=0
00:04:53.395 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.395 anon_hugepages=0
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
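With anon, surp and resv all measured as 0, hugepages.sh@102-105 above publishes the counters and @107-110 asserts that the odd allocation is fully accounted for. Restated as a hedged standalone check (variable names taken from the trace; get_meminfo_sketch is the illustrative helper from earlier, not SPDK's function):

requested=1025                                 # the odd page count this odd_alloc test asks for
anon=0 surp=0 resv=0                           # AnonHugePages, HugePages_Surp, HugePages_Rsvd above
nr_hugepages=1025                              # echoed at setup/hugepages.sh@102
(( requested == nr_hugepages + surp + resv ))  # the @107 check
(( requested == nr_hugepages ))                # the @109 check
total=$(get_meminfo_sketch HugePages_Total)    # the lookup the trace enters next; 1025 in the snapshot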
-- # local var val
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.396 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802656 kB' 'MemAvailable: 49305900 kB' 'Buffers: 2704 kB' 'Cached: 10299796 kB' 'SwapCached: 0 kB' 'Active: 7314288 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919696 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521576 kB' 'Mapped: 214328 kB' 'Shmem: 6401312 kB' 'KReclaimable: 191792 kB' 'Slab: 559344 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367552 kB' 'KernelStack: 12768 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8022888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: setup/common.sh@32 tests each snapshot key against HugePages_Total in turn, skipping every non-matching key via 'continue']
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.397 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22134512 kB' 'MemUsed: 10742428 kB' 'SwapCached: 0 kB' 'Active: 5483256 kB' 'Inactive: 3263864 kB' 'Active(anon): 5294684 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467252 kB' 'Mapped: 84520 kB' 'AnonPages: 282952 kB' 'Shmem: 5014816 kB' 'KernelStack: 7032 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313304 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 191864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: identical per-key scan for HugePages_Surp over the node0 snapshot]
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.398 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23669560 kB' 'MemUsed: 3995192 kB' 'SwapCached: 0 kB' 'Active: 1831084 kB' 'Inactive: 242732 kB' 'Active(anon): 1625064 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1835292 kB' 'Mapped: 129808 kB' 'AnonPages: 238592 kB' 'Shmem: 1386540 kB' 'KernelStack: 5720 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70352 kB' 'Slab: 246040 kB' 'SReclaimable: 70352 kB' 'SUnreclaim: 175688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided: identical per-key scan for HugePages_Surp over the node1 snapshot]
00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:53.399 node0=512 expecting 513 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:53.399 node1=513 expecting 512 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:53.399 00:04:53.399 real 0m1.347s 00:04:53.399 user 0m0.566s 00:04:53.399 sys 0m0.740s 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.399 15:46:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.399 ************************************ 00:04:53.399 END TEST odd_alloc 00:04:53.399 ************************************ 00:04:53.399 15:46:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:53.399 15:46:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:53.399 15:46:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.399 15:46:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.399 15:46:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.399 ************************************ 00:04:53.399 START TEST custom_alloc 00:04:53.399 ************************************ 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.399 15:46:20 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
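The sizing arithmetic traced above is straightforward: get_test_nr_hugepages divides the requested size, given in kB, by the default 2048 kB hugepage size visible in the snapshots, so 1048576 kB (1 GiB) becomes nr_hugepages=512 for nodes_hp[0] and 2097152 kB (2 GiB) becomes nr_hugepages=1024 for nodes_hp[1]. A minimal bash sketch of that conversion (size_to_pages is a hypothetical name, not a setup.sh helper):

  # Hypothetical helper mirroring the arithmetic in the trace above:
  # a request in kB divided by the 2048 kB default hugepage size.
  size_to_pages() {
      local size_kb=$1
      local hugepage_kb=${2:-2048}
      echo $(( size_kb / hugepage_kb ))
  }

  size_to_pages 1048576    # 512  -> nodes_hp[0]
  size_to_pages 2097152    # 1024 -> nodes_hp[1]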
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.399 15:46:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:54.777 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.777 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:54.777 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.777 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.777 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.777 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.777 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.777 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.777 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:54.777 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.777 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.777 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.777 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.777 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.777 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.777 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.777 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44742416 kB' 'MemAvailable: 48245660 kB' 'Buffers: 2704 kB' 'Cached: 10299880 kB' 'SwapCached: 0 kB' 'Active: 7314600 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521272 kB' 'Mapped: 214356 kB' 'Shmem: 6401396 kB' 'KReclaimable: 191792 kB' 'Slab: 559100 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367308 kB' 'KernelStack: 12672 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8023084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.777 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.778 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.779 15:46:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44741912 kB' 'MemAvailable: 48245156 kB' 'Buffers: 2704 kB' 'Cached: 10299884 kB' 'SwapCached: 0 kB' 'Active: 7314620 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920028 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521756 kB' 'Mapped: 214340 kB' 'Shmem: 6401400 kB' 'KReclaimable: 191792 kB' 'Slab: 559100 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367308 kB' 'KernelStack: 12784 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8023104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.779 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
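Every "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pair above and below is one iteration of the same scan: get_meminfo snapshots the meminfo source once, then walks it line by line with IFS=': ', skipping every field until the requested key matches, which is why a single lookup produces one check per /proc/meminfo field in the xtrace. A minimal re-creation of that loop, as a sketch under assumed names rather than the actual setup/common.sh implementation:

    # Sketch of the scan traced here; not the real SPDK get_meminfo.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # mirrors the per-key "continue" lines
            echo "$val"                        # e.g. "0" for HugePages_Surp on this run
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp          # -> 0, so the test sets surp=0

The real helper also supports per-node lookups, which is what the "local node=" and the "[[ -e /sys/devices/system/node/node/meminfo ]]" check in the trace are about (the "Node N " prefix stripping at common.sh@29 handles that file's format); the sketch reads the system-wide /proc/meminfo only.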
00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44741660 kB' 'MemAvailable: 48244904 kB' 'Buffers: 2704 kB' 'Cached: 10299888 kB' 'SwapCached: 0 kB' 'Active: 7314128 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919536 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521292 kB' 'Mapped: 214340 kB' 'Shmem: 6401404 kB' 'KReclaimable: 191792 kB' 'Slab: 559104 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367312 kB' 'KernelStack: 12752 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8023124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.780 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.781 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same "setup/common.sh@31 -- # IFS=': '" / "@31 -- # read -r var val _" / "@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "@32 -- # continue" xtrace repeats for each non-matching field, Zswap through HugePages_Free ...]
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:54.782 nr_hugepages=1536
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:54.782 resv_hugepages=0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:54.782 surplus_hugepages=0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:54.782 anon_hugepages=0
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:54.782 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44741660 kB' 'MemAvailable: 48244904 kB' 'Buffers: 2704 kB' 'Cached: 10299928 kB' 'SwapCached: 0 kB' 'Active: 7314452 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919860 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521656 kB' 'Mapped: 214340 kB' 'Shmem: 6401444 kB' 'KReclaimable: 191792 kB' 'Slab: 559104 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367312 kB' 'KernelStack: 12784 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8023144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
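The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until the requested key matches. A minimal bash sketch of the same scan, assuming only a standard /proc/meminfo layout; the helper name meminfo_value is ours, not SPDK's:

  # Sketch: split each /proc/meminfo line on ': ', skip non-matching keys
  # with 'continue' (the repeated trace entries above), echo the match.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }
  meminfo_value HugePages_Rsvd   # prints 0 on this runner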
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.783 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue xtrace repeats for each non-matching field, MemFree through Unaccepted ...]
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
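The get_nodes call above discovers the NUMA nodes with an extglob pattern and records a per-node allocation target (512 pages on node0, 1024 on node1 in this custom_alloc run; the test hard-codes that split). A hedged sketch of the enumeration, reading each node's live HugePages_Total instead of the test's fixed targets; nodes_sys mirrors the traced array name, the awk extraction is our simplification:

  # Sketch: glob the NUMA node directories the way the traced pattern does
  # and read each node's current HugePages_Total from its per-node meminfo.
  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 2 on this runner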
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.784 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22125924 kB' 'MemUsed: 10751016 kB' 'SwapCached: 0 kB' 'Active: 5483436 kB' 'Inactive: 3263864 kB' 'Active(anon): 5294864 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467328 kB' 'Mapped: 84520 kB' 'AnonPages: 283112 kB' 'Shmem: 5014892 kB' 'KernelStack: 7064 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313188 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 191748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.785 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue xtrace repeats for each non-matching node0 field, MemFree through HugePages_Free ...]
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
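With a node argument (get_meminfo HugePages_Surp 0 above), the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the same field scan runs. A simplified stand-in, assuming only the standard sysfs layout; node_meminfo_value and the sed/awk pipeline are ours, not SPDK's:

  # Sketch: pick the node's meminfo file when it exists, drop the
  # "Node N " prefix, then look up the field exactly as before.
  node_meminfo_value() {
      local node=$1 get=$2 mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      sed 's/^Node [0-9]* //' "$mem_f" |
          awk -F':[ \t]*' -v k="$get" '$1 == k { print $2; exit }'
  }
  node_meminfo_value 0 HugePages_Surp   # prints 0, matching the trace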
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22615736 kB' 'MemUsed: 5049016 kB' 'SwapCached: 0 kB' 'Active: 1831084 kB' 'Inactive: 242732 kB' 'Active(anon): 1625064 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1835324 kB' 'Mapped: 129820 kB' 'AnonPages: 238552 kB' 'Shmem: 1386572 kB' 'KernelStack: 5720 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70352 kB' 'Slab: 245916 kB' 'SReclaimable: 70352 kB' 'SUnreclaim: 175564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.786 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue xtrace repeats for each non-matching node1 field, MemFree through HugePages_Free ...]
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.787 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.788 15:46:21
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:54.788 node0=512 expecting 512
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:54.788 node1=1024 expecting 1024
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:54.788
00:04:54.788 real 0m1.386s
00:04:54.788 user 0m0.553s
00:04:54.788 sys 0m0.793s
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.788 15:46:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:54.788 ************************************
00:04:54.788 END TEST custom_alloc
00:04:54.788 ************************************
00:04:54.788 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:54.788 15:46:21 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:54.788 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:54.788 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.788 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:55.045 ************************************
00:04:55.045 START TEST no_shrink_alloc
00:04:55.045 ************************************
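The trace below steps through setup/hugepages.sh's get_test_nr_hugepages and get_test_nr_hugepages_per_node for this test: a 2097152 kB request becomes nr_hugepages=1024, and because node 0 is named explicitly, the whole pool is expected on that node. A minimal sketch of that logic, assuming the kB units and the even-split fallback implied by the trace (not the verbatim SPDK source; the real script reuses _no_nodes as its loop variable, renamed _node here for clarity):

    : "${default_hugepages:=2048}"   # assumed default hugepage size in kB

    get_test_nr_hugepages() {
        local size=$1                # requested pool size; 2097152 kB here
        shift
        local node_ids=("$@")        # optional NUMA node ids ("0" in this run)
        ((size >= default_hugepages)) || return 1
        nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 -> 1024 pages
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2            # NUMA nodes on this rig
        local -g nodes_test=()
        local _node
        if ((${#user_nodes[@]} > 0)); then
            # Explicit node list: each named node is expected to hold the
            # full allocation (node0 -> 1024 pages in the trace below).
            for _node in "${user_nodes[@]}"; do
                nodes_test[_node]=$_nr_hugepages
            done
            return 0
        fi
        # No node list: spread the pool evenly across all nodes.
        for ((_node = 0; _node < _no_nodes; _node++)); do
            nodes_test[_node]=$((_nr_hugepages / _no_nodes))
        done
    }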
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:55.045 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:55.046 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.046 15:46:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:55.980 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:55.980 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:55.980 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:55.980 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:55.980 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:55.980 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:55.980 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:55.980 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:55.980 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:55.980 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:55.980 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:55.980 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:55.980 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:55.980 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:55.980 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:55.980 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:55.980 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
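setup.sh has finished (every target device is already bound to vfio-pci), and verify_nr_hugepages takes over next. Its shape, as far as the xtrace reveals it, is roughly the following sketch; get_meminfo is sketched further below, and the per-node comparison details here are assumptions rather than the verbatim source:

    verify_nr_hugepages() {
        local node surp resv anon=0

        # AnonHugePages is only sampled while THP is not "[never]"
        # (this run reports "always [madvise] never").
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)

        # Assumed per-node pass: fold each node's surplus into its expected
        # count, then report it, e.g. "node0=1024 expecting 1024" on success.
        for node in "${!nodes_test[@]}"; do
            ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))
            echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
        done
    }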
00:04:56.244 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45771244 kB' 'MemAvailable: 49274488 kB' 'Buffers: 2704 kB' 'Cached: 10300008 kB' 'SwapCached: 0 kB' 'Active: 7320096 kB' 'Inactive: 3506596 kB' 'Active(anon): 6925504 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527284 kB' 'Mapped: 214924 kB' 'Shmem: 6401524 kB' 'KReclaimable: 191792 kB' 'Slab: 559244 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367452 kB' 'KernelStack: 12816 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8029620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:56.245 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (MemTotal through HardwareCorrupted each read and tested against AnonHugePages; no match, continue)
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
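Each of these probes runs the same get_meminfo helper from setup/common.sh, whose control flow the xtrace above exposes line by line (@17 through @33). Reconstructed as a runnable sketch, with the caveat that it is inferred from the trace rather than copied from the source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # A per-node query reads that node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Split "Key: value kB" on ":" and " "; first matching key wins.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With node unset, the node-specific path /sys/devices/system/node/node/meminfo fails the existence test, which is exactly the [[ -e ... ]] and [[ -n '' ]] pair visible in the trace, so the global /proc/meminfo is read.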
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.246 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45773424 kB' 'MemAvailable: 49276668 kB' 'Buffers: 2704 kB' 'Cached: 10300008 kB' 'SwapCached: 0 kB' 'Active: 7321124 kB' 'Inactive: 3506596 kB' 'Active(anon): 6926532 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528328 kB' 'Mapped: 215276 kB' 'Shmem: 6401524 kB' 'KReclaimable: 191792 kB' 'Slab: 559244 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367452 kB' 'KernelStack: 12832 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8029636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:56.247 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (MemTotal through HugePages_Rsvd each read and tested against HugePages_Surp; no match, continue)
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
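With anon=0 and surp=0 established, HugePages_Rsvd is the last counter the test needs before the per-node comparison. Outside the harness, the same three values can be spot-checked straight from procfs; the get_meminfo calls shown are against the sketch above, not part of this log:

    # Hypothetical manual equivalent of the three probes in this trace:
    grep -E '^(AnonHugePages|HugePages_Surp|HugePages_Rsvd)' /proc/meminfo
    # with the sketched helper:
    #   anon=$(get_meminfo AnonHugePages)   # 0 in this run
    #   surp=$(get_meminfo HugePages_Surp)  # 0 in this run
    #   resv=$(get_meminfo HugePages_Rsvd)  # read next in the trace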
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.248 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.249 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45780792 kB' 'MemAvailable: 49284036 kB' 'Buffers: 2704 kB' 'Cached: 10300044 kB' 'SwapCached: 0 kB' 'Active: 7314540 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919948 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521640 kB' 'Mapped: 214356 kB' 'Shmem: 6401560 kB' 'KReclaimable: 191792 kB' 'Slab: 559220 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367428 kB' 'KernelStack: 12752 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8023540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: MemTotal through HugePages_Free are each compared against HugePages_Rsvd, fail the literal match, and are skipped via continue]
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.251 nr_hugepages=1024
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.251 resv_hugepages=0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.251 surplus_hugepages=0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.251 anon_hugepages=0
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
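(Note: the @107/@109 checks above assert that the hugepage pool is idle before the no_shrink_alloc test proceeds; the literal 1024 on the left-hand side is, by all appearances, the expanded HugePages_Free value. A hedged sketch of that invariant, reusing the get_meminfo sketch from the earlier note, with variable names mirroring the trace:)

  nr_hugepages=1024                       # pool size the test configured
  free=$(get_meminfo HugePages_Free)      # 1024 in the log above
  surp=$(get_meminfo HugePages_Surp)      # 0
  resv=$(get_meminfo HugePages_Rsvd)      # 0

  # every page free, none surplus or reserved -- otherwise a previous
  # test leaked hugepages and the per-node numbers below would be off
  (( free == nr_hugepages + surp + resv )) || echo 'unexpected hugepage state' >&2
  (( free == nr_hugepages )) || echo 'hugepages still in use' >&2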
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.251 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45780540 kB' 'MemAvailable: 49283784 kB' 'Buffers: 2704 kB' 'Cached: 10300044 kB' 'SwapCached: 0 kB' 'Active: 7314392 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919800 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521512 kB' 'Mapped: 214356 kB' 'Shmem: 6401560 kB' 'KReclaimable: 191792 kB' 'Slab: 559220 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367428 kB' 'KernelStack: 12784 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8023560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: MemTotal through Unaccepted are each compared against HugePages_Total, fail the literal match, and are skipped via continue]
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
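(Note: the backslashed right-hand sides throughout this trace, such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, are not corruption: when the right side of a [[ ... == ... ]] comparison is quoted in the source, bash's xtrace escapes every character to show it is matched literally instead of as a glob pattern. A two-line demo:)

  set -x
  key=HugePages_Total
  [[ $key == "HugePages_Total" ]] && echo match
  # xtrace prints: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]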
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.252 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21079488 kB' 'MemUsed: 11797452 kB' 'SwapCached: 0 kB' 'Active: 5483312 kB' 'Inactive: 3263864 kB' 'Active(anon): 5294740 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467384 kB' 'Mapped: 84520 kB' 'AnonPages: 282892 kB' 'Shmem: 5014948 kB' 'KernelStack: 7048 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313272 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 191832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
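(Note: get_nodes above walks the NUMA nodes under /sys and records the per-node hugepage allocation; on this host node0 got all 1024 pages and node1 got none. The trace only shows the resulting assignments, so where the 1024/0 values are read from is an assumption here; per-node 2048 kB page counts live in a standard sysfs file, and a sketch of the enumeration could look like:)

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # per-node count of allocated 2048 kB hugepages (assumed source of the 1024/0 values)
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}     # 2 on this host
  (( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }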
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.253 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the IFS=': ' / read -r var val _ / continue sequence repeats for every remaining field of the node meminfo listing, Unevictable through HugePages_Free ...]
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:56.254 node0=1024 expecting 1024
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.254 15:46:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:57.629 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.630 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:57.630 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.630 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.630 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.630 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.630 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.630 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.630 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:57.630 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.630 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.630 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.630 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.630 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.630 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.630 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.630 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
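The xtrace above is stepping through setup/common.sh's get_meminfo helper one field at a time. A minimal sketch of what the traced lines (@16-@33) appear to implement, reconstructed for readability; treat it as an approximation, not the verbatim SPDK script:

    shopt -s extglob    # needed for the +([0-9]) pattern below
    get_meminfo() {     # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node meminfo file, whose lines
        # carry a "Node N " prefix that is stripped before parsing; with no
        # node, the [[ -e .../node/node/meminfo ]] test in the trace fails
        # and the global /proc/meminfo is used.
        [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
            echo "$val"                        # e.g. 0 for HugePages_Surp here
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Because IFS=': ' splits on both the colon and the following spaces, var receives the field name, val the number, and the trailing "kB" unit falls into the throwaway _ variable.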
00:04:57.630 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45754368 kB' 'MemAvailable: 49257612 kB' 'Buffers: 2704 kB' 'Cached: 10300124 kB' 'SwapCached: 0 kB' 'Active: 7315364 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522348 kB' 'Mapped: 214388 kB' 'Shmem: 6401640 kB' 'KReclaimable: 191792 kB' 'Slab: 559372 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367580 kB' 'KernelStack: 12768 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8023948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
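A quick consistency check on the snapshot just printed: the hugetlb accounting agrees with itself, since HugePages_Total times Hugepagesize must equal the Hugetlb line, and 1024 free out of 1024 total confirms nothing is currently pinning pages from the pool:

    # 1024 pages x 2048 kB/page = 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB'
    echo $((1024 * 2048))    # -> 2097152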
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.630 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same IFS=': ' / read -r var val _ / continue sequence repeats for every field from MemFree through HardwareCorrupted ...]
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
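From here verify_nr_hugepages repeats the same lookup for HugePages_Surp and then HugePages_Rsvd. A hedged sketch of the bookkeeping the trace suggests around these calls (variable and array names follow hugepages.sh@92-@100 and @117-@128 above; the control flow connecting them is an assumption, and get_meminfo plus the nodes_test/nodes_sys arrays are presumed to exist from earlier in the script):

    anon=$(get_meminfo AnonHugePages)     # 0 in this run: no THP inflating the count
    surp=$(get_meminfo HugePages_Surp)    # 0: no surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)    # 0: no pages reserved against future faults
    for node in "${!nodes_test[@]}"; do
        # report each node's allocation against the expected value,
        # e.g. "node0=1024 expecting 1024" in the trace above
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done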
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45754792 kB' 'MemAvailable: 49258036 kB' 'Buffers: 2704 kB' 'Cached: 10300128 kB' 'SwapCached: 0 kB' 'Active: 7315000 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920408 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521972 kB' 'Mapped: 214364 kB' 'Shmem: 6401644 kB' 'KReclaimable: 191792 kB' 'Slab: 559352 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367560 kB' 'KernelStack: 12784 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8023964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.631 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same IFS=': ' / read -r var val _ / continue sequence repeats for every field from MemFree through HugePages_Rsvd ...]
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
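The counters the helper extracts one read -r loop at a time can also be pulled in a single pass; a convenience one-liner, not part of the traced scripts:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo

On this host it would print the same 1024/1024/0/0 quartet that appears in every snapshot above.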
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45756136 kB' 'MemAvailable: 49259380 kB' 'Buffers: 2704 kB' 'Cached: 10300148 kB' 'SwapCached: 0 kB' 'Active: 7314852 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920260 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521808 kB' 'Mapped: 214364 kB' 'Shmem: 6401664 kB' 'KReclaimable: 191792 kB' 'Slab: 559400 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367608 kB' 'KernelStack: 12752 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8023988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.633 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same IFS=': ' / read -r var val _ / continue sequence repeats for every field from MemFree through PageTables ...]
00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.634 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.635 nr_hugepages=1024 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.635 resv_hugepages=0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.635 surplus_hugepages=0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.635 anon_hugepages=0 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.635 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45756540 kB' 'MemAvailable: 49259784 kB' 'Buffers: 2704 kB' 'Cached: 10300168 kB' 'SwapCached: 0 kB' 'Active: 7314960 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920368 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521948 kB' 'Mapped: 214364 kB' 'Shmem: 6401684 kB' 'KReclaimable: 191792 kB' 'Slab: 559400 kB' 'SReclaimable: 191792 kB' 'SUnreclaim: 367608 kB' 'KernelStack: 12784 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8024008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 13850624 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed, 00:04:57.635-00:04:57.897: setup/common.sh@31-32 repeat "IFS=': '", "read -r var val _", "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" and "continue" once per meminfo field above until the requested key is reached]
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
+([0-9]) }") 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21082876 kB' 'MemUsed: 11794064 kB' 'SwapCached: 0 kB' 'Active: 5483904 kB' 'Inactive: 3263864 kB' 'Active(anon): 5295332 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8467404 kB' 'Mapped: 84520 kB' 'AnonPages: 283492 kB' 'Shmem: 5014968 kB' 'KernelStack: 7064 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121440 kB' 'Slab: 313332 kB' 'SReclaimable: 121440 kB' 'SUnreclaim: 191892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.897 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.898 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.899 node0=1024 expecting 1024 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.899 00:04:57.899 real 0m2.891s 00:04:57.899 user 0m1.175s 00:04:57.899 sys 0m1.644s 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.899 15:46:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 END TEST no_shrink_alloc 
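The get_meminfo calls condensed above all follow the same pattern, which is why the xtrace shows one comparison per meminfo field. As a reading aid, here is a minimal standalone bash sketch of that lookup and of the ledger check the test performs; it is an illustration under stated assumptions, not SPDK's exact code (get_meminfo_sketch is a hypothetical name), and assumes a Linux host with bash extglob available.

shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}   # meminfo key to fetch; optional NUMA node
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node stats live in sysfs and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Ledger check in the spirit of hugepages.sh@107 above: the kernel's total
# page count must equal the requested count (1024 in this run) plus surplus
# plus reserved pages, or the shrink test has leaked or lost pages.
total=$(get_meminfo_sketch HugePages_Total)
resv=$(get_meminfo_sketch HugePages_Rsvd)
surp=$(get_meminfo_sketch HugePages_Surp)
(( total == 1024 + surp + resv )) && echo "hugepage accounting consistent"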
00:04:57.899 ************************************ 00:04:57.899 15:46:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:57.899 15:46:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:57.899 00:04:57.899 real 0m11.253s 00:04:57.899 user 0m4.312s 00:04:57.899 sys 0m5.829s 00:04:57.899 15:46:24 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.899 15:46:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 END TEST hugepages 00:04:57.899 ************************************ 00:04:57.899 15:46:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:57.899 15:46:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:57.899 15:46:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.899 15:46:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.899 15:46:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 START TEST driver 00:04:57.899 ************************************ 00:04:57.899 15:46:24 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:57.899 * Looking for test storage... 
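The clear_hp teardown traced above writes 0 into every hugepage pool on every NUMA node before handing control back. A minimal sketch, assuming the standard sysfs nr_hugepages knob (must run as root):

# Return all reserved hugepages of every size on every node to the kernel.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes    # presumably consumed by later setup.sh invocations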
00:04:57.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:57.899 15:46:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:57.899 15:46:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.899 15:46:24 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.433 15:46:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:00.433 15:46:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.433 15:46:27 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.433 15:46:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:00.433 ************************************ 00:05:00.433 START TEST guess_driver 00:05:00.433 ************************************ 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:00.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:00.433 15:46:27 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:00.433 Looking for driver=vfio-pci
00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.433 15:46:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:01.810 15:46:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:01.810 15:46:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:01.810 15:46:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the @58 marker check / @61 driver check / @57 read triplet repeats for every remaining device line in the config output, each reporting vfio-pci ...]
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:02.749 15:46:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:05.284
00:05:05.284 real 0m4.782s
00:05:05.284 user 0m1.089s
00:05:05.284 sys 0m1.785s
00:05:05.284 15:46:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:05.284 15:46:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:05.284 ************************************
00:05:05.284 END TEST guess_driver
00:05:05.284 ************************************
00:05:05.284 15:46:31 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:05:05.284
00:05:05.284 real 0m7.311s
00:05:05.284 user 0m1.584s
00:05:05.284 sys 0m2.830s
00:05:05.284 15:46:31
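guess_driver's pick, replayed above, keys off the IOMMU: with 141 populated groups under /sys/kernel/iommu_groups and vfio_pci resolvable via modprobe --show-depends, vfio-pci wins. A hedged sketch of that decision; the uio_pci_generic fallback is an assumption here, not shown in this trace:

# Prefer vfio-pci whenever the platform exposes IOMMU groups (or vfio's
# unsafe no-IOMMU mode is switched on) and the module is installable.
pick_driver() {
    local unsafe=N
    local groups=(/sys/kernel/iommu_groups/*)
    [[ -e ${groups[0]} ]] || groups=()               # empty-glob guard
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        if modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
            return 0
        fi
    fi
    echo uio_pci_generic    # assumed fallback when vfio is unusable
}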
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.284 15:46:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.284 ************************************ 00:05:05.284 END TEST driver 00:05:05.284 ************************************ 00:05:05.284 15:46:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.284 15:46:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.284 15:46:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.284 15:46:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.284 15:46:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.284 ************************************ 00:05:05.284 START TEST devices 00:05:05.284 ************************************ 00:05:05.284 15:46:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.284 * Looking for test storage... 00:05:05.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:05.284 15:46:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.284 15:46:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:05.284 15:46:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.284 15:46:32 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:06.665 
15:46:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:06.665 No valid GPT data, bailing 00:05:06.665 15:46:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:06.665 15:46:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:06.665 15:46:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.665 15:46:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:06.665 ************************************ 00:05:06.665 START TEST nvme_mount 00:05:06.665 ************************************ 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
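Two gates run above before nvme0n1 is accepted as the test disk: spdk-gpt.py and blkid must agree no partition table is in use ("No valid GPT data, bailing" / empty PTTYPE), and the disk must be at least min_disk_size. A minimal sketch of both gates, assuming the usual sysfs sector count (512-byte units, so this disk reports 1000204886016 bytes):

# Accept a block device only if it is unpartitioned and >= 3 GiB.
dev=nvme0n1                                      # the disk picked in this run
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, as in devices.sh@198
[[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || exit 1
bytes=$(( $(< "/sys/block/$dev/size") * 512 ))   # sysfs size is in 512 B sectors
(( bytes >= min_disk_size )) && echo "/dev/$dev usable"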
# (( part <= part_no )) 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.665 15:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:08.042 Creating new GPT entries in memory. 00:05:08.042 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.042 other utilities. 00:05:08.042 15:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.042 15:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.042 15:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.042 15:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.042 15:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:08.974 Creating new GPT entries in memory. 00:05:08.974 The operation has completed successfully. 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1021127 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.974 15:46:35 
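The partitioning step just traced zaps the GPT and then creates one 1 GiB partition under flock, so nothing else can race the partition table while udev events are still in flight. A hedged sketch; udevadm settle stands in for the sync_dev_uevents.sh helper the test actually uses:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                             # destroy old GPT + MBR data
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # sectors 2048..2099199 = 1 GiB
udevadm settle                                       # wait for the new nvme0n1p1 node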
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:08.974 15:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:09.908 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:09.908 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:09.908 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:09.908 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 compare / @60 read pair repeats for the sixteen 0000:00:04.x and 0000:80:04.x functions, none matching 0000:88:00.0 ...]
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:10.168 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:10.168 15:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:10.426 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:10.426 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:10.426 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:10.426 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
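cleanup_nvme, replayed above, is strictly ordered: unmount first, then strip the filesystem signature from the partition, then the GPT/PMBR signatures from the whole disk (the wipefs offsets in the output are exactly those signatures). A minimal sketch, with the long workspace path shortened behind a hypothetical variable:

mount_dir=$TEST_STORAGE/nvme_mount    # $TEST_STORAGE: hypothetical shorthand for the path above
mountpoint -q "$mount_dir" && umount "$mount_dir"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic at 0x438
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT headers + PMBR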
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:05:10.426 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.427 15:46:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:11.369 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:11.369 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:05:11.369 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:11.369 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 compare / @60 read pair repeats for the sixteen 0000:00:04.x and 0000:80:04.x functions, none matching ...]
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
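The mkfs helper (common.sh@66-72) traced above takes a device, a mountpoint and an optional size: mkdir, mkfs.ext4 -qF, mount. A minimal sketch; the 1024M third argument relies on mke2fs accepting a suffixed fs-size, exactly as the trace itself does:

mkfs_and_mount() {
    local dev=$1 mount=$2 size=$3
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    mkfs.ext4 -qF "$dev" $size    # unquoted on purpose: an empty size must vanish
    mount "$dev" "$mount"
}
mkfs_and_mount /dev/nvme0n1 /tmp/nvme_mount 1024M   # format only the first GiB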
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.627 15:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 compare / @60 read pair repeats for the sixteen 0000:00:04.x and 0000:80:04.x functions, none matching ...]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:13.003 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:13.003
00:05:13.003 real 0m6.210s
00:05:13.003 user 0m1.397s
00:05:13.003 sys 0m2.325s
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:13.003 15:46:39 setup.sh.devices.nvme_mount --
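verify, whose final nvme_mount run is traced above, pins PCI_ALLOWED to the test disk, reruns setup.sh config, and scans each output row for the expected "Active devices: ..." entry; the data@nvme0n1 form checks that setup.sh refuses to bind a disk holding live data even after the mountpoint is gone. A hedged sketch; the four-field read layout is taken from the trace and should be treated as an assumption about the config output format:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
expected=data@nvme0n1
found=0
while read -r pci _ _ status; do
    [[ $pci == 0000:88:00.0 ]] || continue
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=0000:88:00.0 "$rootdir"/scripts/setup.sh config)
(( found == 1 ))    # fail the test if the device was not reported in use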
common/autotest_common.sh@10 -- # set +x 00:05:13.003 ************************************ 00:05:13.003 END TEST nvme_mount 00:05:13.003 ************************************ 00:05:13.003 15:46:39 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:13.003 15:46:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:13.003 15:46:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.003 15:46:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.003 15:46:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.003 ************************************ 00:05:13.003 START TEST dm_mount 00:05:13.003 ************************************ 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.003 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:13.004 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:13.004 15:46:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:13.939 Creating new GPT entries in memory. 00:05:13.939 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:13.939 other utilities. 00:05:13.939 15:46:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:13.939 15:46:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.939 15:46:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:13.939 15:46:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.939 15:46:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:15.317 Creating new GPT entries in memory. 00:05:15.317 The operation has completed successfully. 00:05:15.317 15:46:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:15.317 15:46:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.317 15:46:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.317 15:46:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.317 15:46:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:16.253 The operation has completed successfully. 00:05:16.253 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:16.253 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.253 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1023510 00:05:16.253 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:16.254 15:46:42 setup.sh.devices.dm_mount -- 
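dm_mount then builds a device-mapper target from the two freshly cut 1 GiB partitions and resolves the /dev/dm-N node behind the mapper name. The trace only shows "dmsetup create nvme_dm_test", so the linear table below, concatenating nvme0n1p1 and nvme0n1p2, is an assumption about what the script feeds it; the readlink and holders steps match the trace:

# Each partition is 2097152 sectors (1 GiB); the table concatenates them.
dmsetup create nvme_dm_test << 'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # -> /dev/dm-0 in this run
dm=${dm##*/}                                 # dm-0, as in the trace
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # both partitions now
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]   # name dm-0 as holder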
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.254 15:46:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:17.206 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:17.206 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:17.206 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:17.206 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 compare / @60 read pair repeats for the sixteen 0000:00:04.x and 0000:80:04.x functions, none matching ...]
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
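After the unmount, verify runs once more with an empty mountpoint and a comma-separated expectation list: even with the filesystem gone, both partitions must still name dm-0 as their holder, so setup.sh keeps its hands off the disk. A hedged sketch of checking that expectation list directly against sysfs:

expectations=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
IFS=, read -ra entries <<< "$expectations"
for entry in "${entries[@]}"; do
    entry=${entry#holder@}              # -> nvme0n1p1:dm-0
    part=${entry%%:*} dm=${entry#*:}
    [[ -e /sys/class/block/$part/holders/$dm ]] || echo "missing holder: $entry"
done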
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.468 15:46:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.403 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.662 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.662 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:18.662 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:18.662 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:18.663 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:18.663 00:05:18.663 real 0m5.688s 00:05:18.663 user 0m0.934s 00:05:18.663 sys 0m1.596s 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.663 15:46:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:18.663 ************************************ 00:05:18.663 END TEST dm_mount 00:05:18.663 ************************************ 00:05:18.663 15:46:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.663 15:46:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.921 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:18.921 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:18.921 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.921 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.921 15:46:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:18.921 00:05:18.921 real 0m13.783s 00:05:18.921 user 0m2.943s 00:05:18.921 sys 0m4.952s 00:05:18.921 15:46:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.921 15:46:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 ************************************ 00:05:18.921 END TEST devices 00:05:18.921 ************************************ 00:05:18.921 15:46:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:18.921 00:05:18.921 real 0m42.757s 00:05:18.921 user 0m12.147s 00:05:18.921 sys 0m18.759s 00:05:18.921 15:46:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.921 15:46:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 ************************************ 00:05:18.921 END TEST setup.sh 00:05:18.921 ************************************ 00:05:19.193 15:46:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.193 15:46:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:20.139 Hugepages 00:05:20.139 node hugesize free / total 00:05:20.139 node0 1048576kB 0 / 0 00:05:20.139 node0 2048kB 2048 / 2048 00:05:20.139 node1 1048576kB 0 / 0 00:05:20.139 node1 2048kB 0 / 0 00:05:20.139 00:05:20.139 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.139 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:20.139 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:20.139 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:20.139 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:20.139 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:20.139 15:46:47 -- spdk/autotest.sh@130 -- # uname -s 00:05:20.397 15:46:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:20.397 15:46:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:20.397 15:46:47 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.331 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.331 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.331 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.331 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.590 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.527 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.527 15:46:49 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:23.465 15:46:50 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:23.465 15:46:50 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:23.465 15:46:50 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.465 15:46:50 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:23.465 15:46:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.465 15:46:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.465 15:46:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.465 15:46:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.465 15:46:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.724 15:46:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:23.724 15:46:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:23.724 15:46:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.660 Waiting for block devices as requested 00:05:24.660 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:24.920 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:24.920 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:25.179 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:25.179 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:25.179 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:25.179 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:25.439 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:25.439 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:25.439 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:25.439 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:25.698 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:25.698 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:25.698 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:25.698 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:25.956 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:25.956 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:25.956 15:46:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:25.956 15:46:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:25.956 15:46:52 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:25.956 15:46:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:25.956 15:46:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:26.216 15:46:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:26.216 15:46:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:26.216 15:46:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:26.216 15:46:52 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:26.216 15:46:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:26.216 15:46:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:26.216 15:46:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:26.216 15:46:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:26.216 15:46:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:26.216 15:46:52 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:26.216 15:46:52 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:26.216 15:46:52 -- common/autotest_common.sh@1557 -- # continue 00:05:26.216 15:46:52 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:26.216 15:46:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.216 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.216 15:46:52 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:26.216 15:46:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.216 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.216 15:46:52 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.152 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.152 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.152 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:27.411 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.411 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.352 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.352 15:46:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.352 15:46:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.352 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:05:28.352 15:46:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:28.352 15:46:55 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:28.352 15:46:55 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.352 15:46:55 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:28.352 15:46:55 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:28.352 15:46:55 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:28.352 15:46:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:28.352 15:46:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:28.352 15:46:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.352 15:46:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.352 15:46:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.609 15:46:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:28.609 15:46:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:28.609 15:46:55 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:28.609 15:46:55 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:28.609 15:46:55 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:28.609 15:46:55 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:28.609 15:46:55 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:28.609 15:46:55 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:28.609 15:46:55 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:28.609 15:46:55 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1028693 00:05:28.609 15:46:55 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.609 15:46:55 -- common/autotest_common.sh@1598 -- # waitforlisten 1028693 00:05:28.609 15:46:55 -- common/autotest_common.sh@829 -- # '[' -z 1028693 ']' 00:05:28.609 15:46:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.609 15:46:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.609 15:46:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.609 15:46:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.609 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:05:28.609 [2024-07-15 15:46:55.374187] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:28.609 [2024-07-15 15:46:55.374269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028693 ] 00:05:28.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.609 [2024-07-15 15:46:55.435594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.867 [2024-07-15 15:46:55.551307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.433 15:46:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.433 15:46:56 -- common/autotest_common.sh@862 -- # return 0 00:05:29.433 15:46:56 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:29.433 15:46:56 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:29.433 15:46:56 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:32.717 nvme0n1 00:05:32.717 15:46:59 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:32.717 [2024-07-15 15:46:59.628952] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:32.717 [2024-07-15 15:46:59.628994] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:32.717 request: 00:05:32.717 { 00:05:32.717 "nvme_ctrlr_name": "nvme0", 00:05:32.717 "password": "test", 00:05:32.717 "method": "bdev_nvme_opal_revert", 00:05:32.717 "req_id": 1 00:05:32.717 } 00:05:32.717 Got JSON-RPC error response 00:05:32.717 response: 00:05:32.717 { 00:05:32.717 "code": -32603, 00:05:32.717 "message": "Internal error" 00:05:32.717 } 00:05:32.975 15:46:59 -- common/autotest_common.sh@1604 -- # true 00:05:32.975 15:46:59 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:32.975 15:46:59 -- common/autotest_common.sh@1608 -- # killprocess 1028693 00:05:32.975 15:46:59 -- common/autotest_common.sh@948 -- # '[' -z 1028693 ']' 00:05:32.975 15:46:59 -- common/autotest_common.sh@952 -- # kill -0 1028693 00:05:32.975 15:46:59 -- common/autotest_common.sh@953 -- # uname 00:05:32.975 15:46:59 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.975 15:46:59 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028693 00:05:32.976 15:46:59 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.976 15:46:59 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.976 15:46:59 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028693' 00:05:32.976 killing process with pid 1028693 00:05:32.976 15:46:59 -- common/autotest_common.sh@967 -- # kill 1028693 00:05:32.976 15:46:59 -- common/autotest_common.sh@972 -- # wait 1028693 00:05:34.939 15:47:01 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.939 15:47:01 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.939 15:47:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.939 15:47:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.939 15:47:01 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.939 15:47:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.939 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:05:34.939 15:47:01 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.939 15:47:01 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.939 15:47:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.939 15:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.939 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:05:34.939 ************************************ 00:05:34.939 START TEST env 00:05:34.939 ************************************ 00:05:34.939 15:47:01 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.939 * Looking for test storage... 00:05:34.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:34.939 15:47:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.939 15:47:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.939 15:47:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.939 15:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.939 ************************************ 00:05:34.940 START TEST env_memory 00:05:34.940 ************************************ 00:05:34.940 15:47:01 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.940 00:05:34.940 00:05:34.940 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.940 http://cunit.sourceforge.net/ 00:05:34.940 00:05:34.940 00:05:34.940 Suite: memory 00:05:34.940 Test: alloc and free memory map ...[2024-07-15 15:47:01.655443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.940 passed 00:05:34.940 Test: mem map translation ...[2024-07-15 15:47:01.675348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.940 [2024-07-15 15:47:01.675370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.940 [2024-07-15 15:47:01.675419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.940 [2024-07-15 15:47:01.675430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.940 passed 00:05:34.940 Test: mem map registration ...[2024-07-15 15:47:01.715820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.940 [2024-07-15 15:47:01.715839] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.940 passed 00:05:34.940 Test: mem map adjacent registrations ...passed 00:05:34.940 00:05:34.940 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.940 suites 1 1 n/a 0 0 00:05:34.940 tests 4 4 4 0 0 00:05:34.940 asserts 152 152 152 0 n/a 00:05:34.940 00:05:34.940 Elapsed time = 0.140 seconds 00:05:34.940 00:05:34.940 real 0m0.148s 00:05:34.940 user 0m0.134s 00:05:34.940 sys 0m0.013s 00:05:34.940 15:47:01 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.940 15:47:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.940 ************************************ 00:05:34.940 END TEST env_memory 00:05:34.940 ************************************ 00:05:34.940 15:47:01 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.940 15:47:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.940 15:47:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.940 15:47:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.940 15:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.940 ************************************ 00:05:34.940 START TEST env_vtophys 00:05:34.940 ************************************ 00:05:34.940 15:47:01 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.940 EAL: lib.eal log level changed from notice to debug 00:05:34.940 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.940 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.940 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.940 EAL: Detected lcore 3 as core 3 on socket 0 00:05:34.940 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.940 EAL: Detected lcore 5 as core 5 on socket 0 00:05:34.940 EAL: Detected lcore 6 as core 8 on socket 0 00:05:34.940 EAL: Detected lcore 7 as core 9 on socket 0 00:05:34.940 EAL: Detected lcore 8 as core 10 on socket 0 00:05:34.940 EAL: Detected lcore 9 as core 11 on socket 0 00:05:34.940 EAL: Detected lcore 10 as core 12 on socket 0 00:05:34.940 EAL: Detected lcore 11 as core 13 on socket 0 00:05:34.940 EAL: Detected lcore 12 as core 0 on socket 1 00:05:34.940 EAL: Detected lcore 13 as core 1 on socket 1 00:05:34.940 EAL: Detected lcore 14 as core 2 on socket 1 00:05:34.940 EAL: Detected lcore 15 as core 3 on socket 1 00:05:34.940 EAL: Detected lcore 16 as core 4 on socket 1 00:05:34.940 EAL: Detected lcore 17 as core 5 on socket 1 00:05:34.940 EAL: Detected lcore 18 as core 8 on socket 1 00:05:34.940 EAL: Detected lcore 19 as core 9 on socket 1 00:05:34.940 EAL: Detected lcore 20 as core 10 on socket 1 00:05:34.940 EAL: Detected lcore 21 as core 11 on socket 1 00:05:34.940 EAL: Detected lcore 22 as core 12 on socket 1 00:05:34.940 EAL: Detected lcore 23 as core 13 on socket 1 00:05:34.940 EAL: Detected lcore 24 as core 0 on socket 0 00:05:34.940 EAL: Detected lcore 25 as core 1 on socket 0 00:05:34.940 EAL: Detected lcore 26 as core 2 on socket 0 00:05:34.940 EAL: Detected lcore 27 as core 3 on socket 0 00:05:34.940 EAL: Detected lcore 28 as core 4 on socket 0 00:05:34.940 EAL: Detected lcore 29 as core 5 on socket 0 00:05:34.940 EAL: Detected lcore 30 as core 8 on socket 0 00:05:34.940 EAL: Detected lcore 31 as core 9 on socket 0 00:05:34.940 EAL: Detected lcore 32 as core 10 on socket 0 00:05:34.940 EAL: Detected lcore 33 as core 11 on socket 0 00:05:34.940 EAL: Detected lcore 34 as core 12 on socket 0 00:05:34.940 EAL: Detected lcore 35 as core 13 on socket 0 00:05:34.940 EAL: Detected lcore 36 as core 0 on socket 1 00:05:34.940 EAL: Detected lcore 37 as core 1 on socket 1 00:05:34.940 EAL: Detected lcore 38 as core 2 on socket 1 00:05:34.940 EAL: Detected lcore 39 as core 3 on socket 1 00:05:34.940 EAL: Detected lcore 40 as core 4 on socket 1 00:05:34.940 EAL: Detected lcore 41 as core 5 on socket 1 00:05:34.940 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:34.940 EAL: Detected lcore 43 as core 9 on socket 1 00:05:34.940 EAL: Detected lcore 44 as core 10 on socket 1 00:05:34.940 EAL: Detected lcore 45 as core 11 on socket 1 00:05:34.940 EAL: Detected lcore 46 as core 12 on socket 1 00:05:34.940 EAL: Detected lcore 47 as core 13 on socket 1 00:05:34.940 EAL: Maximum logical cores by configuration: 128 00:05:34.940 EAL: Detected CPU lcores: 48 00:05:34.940 EAL: Detected NUMA nodes: 2 00:05:34.940 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.940 EAL: Detected shared linkage of DPDK 00:05:34.940 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.199 EAL: Bus pci wants IOVA as 'DC' 00:05:35.200 EAL: Buses did not request a specific IOVA mode. 00:05:35.200 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.200 EAL: Selected IOVA mode 'VA' 00:05:35.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.200 EAL: Probing VFIO support... 00:05:35.200 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.200 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.200 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.200 EAL: VFIO support initialized 00:05:35.200 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.200 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.200 EAL: Setting up physically contiguous memory... 00:05:35.200 EAL: Setting maximum number of open files to 524288 00:05:35.200 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.200 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.200 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.200 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.200 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:35.200 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.200 EAL: Hugepages will be freed exactly as allocated. 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: TSC frequency is ~2700000 KHz 00:05:35.200 EAL: Main lcore 0 is ready (tid=7effb616ea00;cpuset=[0]) 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 0 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.200 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.200 00:05:35.200 00:05:35.200 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.200 http://cunit.sourceforge.net/ 00:05:35.200 00:05:35.200 00:05:35.200 Suite: components_suite 00:05:35.200 Test: vtophys_malloc_test ...passed 00:05:35.200 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.200 EAL: Trying to obtain current memory policy. 
00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.200 EAL: Trying to obtain current memory policy. 
00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.200 EAL: Trying to obtain current memory policy. 00:05:35.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.200 EAL: Restoring previous memory policy: 4 00:05:35.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.200 EAL: request: mp_malloc_sync 00:05:35.200 EAL: No shared files mode enabled, IPC is disabled 00:05:35.200 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.458 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.458 EAL: request: mp_malloc_sync 00:05:35.458 EAL: No shared files mode enabled, IPC is disabled 00:05:35.458 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.458 EAL: Trying to obtain current memory policy. 00:05:35.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.458 EAL: Restoring previous memory policy: 4 00:05:35.458 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.458 EAL: request: mp_malloc_sync 00:05:35.458 EAL: No shared files mode enabled, IPC is disabled 00:05:35.458 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.717 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.717 EAL: request: mp_malloc_sync 00:05:35.717 EAL: No shared files mode enabled, IPC is disabled 00:05:35.717 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.717 EAL: Trying to obtain current memory policy. 
00:05:35.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.975 EAL: Restoring previous memory policy: 4 00:05:35.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.975 EAL: request: mp_malloc_sync 00:05:35.975 EAL: No shared files mode enabled, IPC is disabled 00:05:35.975 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.491 EAL: request: mp_malloc_sync 00:05:36.491 EAL: No shared files mode enabled, IPC is disabled 00:05:36.491 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.491 passed 00:05:36.491 00:05:36.491 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.491 suites 1 1 n/a 0 0 00:05:36.491 tests 2 2 2 0 0 00:05:36.491 asserts 497 497 497 0 n/a 00:05:36.491 00:05:36.491 Elapsed time = 1.358 seconds 00:05:36.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.491 EAL: request: mp_malloc_sync 00:05:36.491 EAL: No shared files mode enabled, IPC is disabled 00:05:36.491 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.491 EAL: No shared files mode enabled, IPC is disabled 00:05:36.491 EAL: No shared files mode enabled, IPC is disabled 00:05:36.491 EAL: No shared files mode enabled, IPC is disabled 00:05:36.491 00:05:36.491 real 0m1.475s 00:05:36.491 user 0m0.839s 00:05:36.491 sys 0m0.605s 00:05:36.491 15:47:03 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.491 15:47:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 ************************************ 00:05:36.491 END TEST env_vtophys 00:05:36.491 ************************************ 00:05:36.491 15:47:03 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.491 15:47:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.491 15:47:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.491 15:47:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.491 15:47:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 ************************************ 00:05:36.491 START TEST env_pci 00:05:36.491 ************************************ 00:05:36.491 15:47:03 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.491 00:05:36.491 00:05:36.491 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.491 http://cunit.sourceforge.net/ 00:05:36.491 00:05:36.491 00:05:36.491 Suite: pci 00:05:36.491 Test: pci_hook ...[2024-07-15 15:47:03.350015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1029710 has claimed it 00:05:36.491 EAL: Cannot find device (10000:00:01.0) 00:05:36.491 EAL: Failed to attach device on primary process 00:05:36.491 passed 00:05:36.491 00:05:36.491 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.491 suites 1 1 n/a 0 0 00:05:36.491 tests 1 1 1 0 0 00:05:36.491 asserts 25 25 25 0 n/a 00:05:36.491 00:05:36.491 Elapsed time = 0.020 seconds 00:05:36.491 00:05:36.491 real 0m0.032s 00:05:36.491 user 0m0.008s 00:05:36.491 sys 0m0.024s 00:05:36.491 15:47:03 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.491 15:47:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 ************************************ 00:05:36.491 END TEST env_pci 00:05:36.491 ************************************ 
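Note: the three env suites traced above (memory_ut, vtophys, pci_ut) are standalone CUnit binaries under test/env/. A minimal sketch for rerunning them by hand, assuming the workspace layout seen in this log; HUGEMEM is assumed here as the usual setup.sh knob for sizing the hugepage pool and is not part of this trace:

    # sketch only -- paths as they appear in this log
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo HUGEMEM=4096 ./scripts/setup.sh     # reserve hugepages, bind NVMe/I/OAT to vfio-pci
    ./test/env/memory/memory_ut              # mem map alloc/translation/registration suite
    ./test/env/vtophys/vtophys               # malloc expand/shrink + vtophys suite (needs hugepages)
    ./test/env/pci/pci_ut                    # pci_hook claim/lock suite
    sudo ./scripts/setup.sh reset            # hand devices back to kernel drivers (ioatdma/nvme)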
00:05:36.491 15:47:03 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.491 15:47:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.491 15:47:03 env -- env/env.sh@15 -- # uname 00:05:36.491 15:47:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.491 15:47:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.491 15:47:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.491 15:47:03 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:36.491 15:47:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.491 15:47:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 ************************************ 00:05:36.491 START TEST env_dpdk_post_init 00:05:36.491 ************************************ 00:05:36.491 15:47:03 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.748 EAL: Detected CPU lcores: 48 00:05:36.748 EAL: Detected NUMA nodes: 2 00:05:36.748 EAL: Detected shared linkage of DPDK 00:05:36.748 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.748 EAL: Selected IOVA mode 'VA' 00:05:36.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.748 EAL: VFIO support initialized 00:05:36.748 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.748 EAL: Using IOMMU type 1 (Type 1) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:36.748 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:37.006 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:37.006 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:37.006 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:37.006 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:37.573 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:40.891 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:40.891 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:41.150 Starting DPDK initialization... 00:05:41.150 Starting SPDK post initialization... 00:05:41.150 SPDK NVMe probe 00:05:41.150 Attaching to 0000:88:00.0 00:05:41.150 Attached to 0000:88:00.0 00:05:41.150 Cleaning up... 
00:05:41.150 00:05:41.150 real 0m4.409s 00:05:41.150 user 0m3.272s 00:05:41.150 sys 0m0.191s 00:05:41.150 15:47:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.150 15:47:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 ************************************ 00:05:41.150 END TEST env_dpdk_post_init 00:05:41.150 ************************************ 00:05:41.150 15:47:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.150 15:47:07 env -- env/env.sh@26 -- # uname 00:05:41.150 15:47:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.150 15:47:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.150 15:47:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.150 15:47:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.150 15:47:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 ************************************ 00:05:41.150 START TEST env_mem_callbacks 00:05:41.150 ************************************ 00:05:41.150 15:47:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.150 EAL: Detected CPU lcores: 48 00:05:41.150 EAL: Detected NUMA nodes: 2 00:05:41.150 EAL: Detected shared linkage of DPDK 00:05:41.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.150 EAL: Selected IOVA mode 'VA' 00:05:41.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.150 EAL: VFIO support initialized 00:05:41.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.150 00:05:41.150 00:05:41.150 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.150 http://cunit.sourceforge.net/ 00:05:41.150 00:05:41.150 00:05:41.150 Suite: memory 00:05:41.150 Test: test ... 
00:05:41.150 register 0x200000200000 2097152 00:05:41.150 malloc 3145728 00:05:41.150 register 0x200000400000 4194304 00:05:41.150 buf 0x200000500000 len 3145728 PASSED 00:05:41.150 malloc 64 00:05:41.150 buf 0x2000004fff40 len 64 PASSED 00:05:41.150 malloc 4194304 00:05:41.150 register 0x200000800000 6291456 00:05:41.150 buf 0x200000a00000 len 4194304 PASSED 00:05:41.150 free 0x200000500000 3145728 00:05:41.150 free 0x2000004fff40 64 00:05:41.150 unregister 0x200000400000 4194304 PASSED 00:05:41.150 free 0x200000a00000 4194304 00:05:41.150 unregister 0x200000800000 6291456 PASSED 00:05:41.150 malloc 8388608 00:05:41.150 register 0x200000400000 10485760 00:05:41.150 buf 0x200000600000 len 8388608 PASSED 00:05:41.150 free 0x200000600000 8388608 00:05:41.150 unregister 0x200000400000 10485760 PASSED 00:05:41.150 passed 00:05:41.150 00:05:41.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.150 suites 1 1 n/a 0 0 00:05:41.150 tests 1 1 1 0 0 00:05:41.150 asserts 15 15 15 0 n/a 00:05:41.150 00:05:41.150 Elapsed time = 0.005 seconds 00:05:41.150 00:05:41.150 real 0m0.049s 00:05:41.150 user 0m0.015s 00:05:41.150 sys 0m0.033s 00:05:41.150 15:47:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.150 15:47:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 ************************************ 00:05:41.150 END TEST env_mem_callbacks 00:05:41.150 ************************************ 00:05:41.150 15:47:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.150 00:05:41.150 real 0m6.406s 00:05:41.150 user 0m4.387s 00:05:41.150 sys 0m1.059s 00:05:41.150 15:47:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.150 15:47:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 ************************************ 00:05:41.150 END TEST env 00:05:41.150 ************************************ 00:05:41.150 15:47:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.151 15:47:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.151 15:47:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.151 15:47:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.151 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.151 ************************************ 00:05:41.151 START TEST rpc 00:05:41.151 ************************************ 00:05:41.151 15:47:07 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.151 * Looking for test storage... 00:05:41.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.151 15:47:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1030366 00:05:41.151 15:47:08 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.151 15:47:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.151 15:47:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1030366 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@829 -- # '[' -z 1030366 ']' 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
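Note: the rpc suite that follows starts a fresh spdk_tgt ('-e bdev' enables the bdev tracepoint group) and drives it over the default /var/tmp/spdk.sock. The rpc_integrity flow traced below reduces to roughly this sequence, reconstructed from the trace; Malloc0 and Passthru0 are simply the names the RPCs return in this run:

    # sketch of the rpc_integrity sequence traced below
    ./build/bin/spdk_tgt -e bdev &
    ./scripts/rpc.py bdev_get_bdevs | jq length               # 0 bdevs before creation
    ./scripts/rpc.py bdev_malloc_create 8 512                 # 8 MB malloc bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length               # now 1
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0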
00:05:41.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.151 15:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.409 [2024-07-15 15:47:08.099261] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:41.409 [2024-07-15 15:47:08.099348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030366 ] 00:05:41.409 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.409 [2024-07-15 15:47:08.160553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.409 [2024-07-15 15:47:08.277110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.409 [2024-07-15 15:47:08.277181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1030366' to capture a snapshot of events at runtime. 00:05:41.409 [2024-07-15 15:47:08.277198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.409 [2024-07-15 15:47:08.277211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.409 [2024-07-15 15:47:08.277222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1030366 for offline analysis/debug. 00:05:41.409 [2024-07-15 15:47:08.277256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.668 15:47:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.668 15:47:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.668 15:47:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.668 15:47:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.668 15:47:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.668 15:47:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.668 15:47:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.668 15:47:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.668 15:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.668 ************************************ 00:05:41.668 START TEST rpc_integrity 00:05:41.668 ************************************ 00:05:41.668 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:41.668 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.668 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.668 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.668 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.668 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:41.668 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.927 { 00:05:41.927 "name": "Malloc0", 00:05:41.927 "aliases": [ 00:05:41.927 "57760c46-bd34-46f5-9e98-1d8e76f2eba2" 00:05:41.927 ], 00:05:41.927 "product_name": "Malloc disk", 00:05:41.927 "block_size": 512, 00:05:41.927 "num_blocks": 16384, 00:05:41.927 "uuid": "57760c46-bd34-46f5-9e98-1d8e76f2eba2", 00:05:41.927 "assigned_rate_limits": { 00:05:41.927 "rw_ios_per_sec": 0, 00:05:41.927 "rw_mbytes_per_sec": 0, 00:05:41.927 "r_mbytes_per_sec": 0, 00:05:41.927 "w_mbytes_per_sec": 0 00:05:41.927 }, 00:05:41.927 "claimed": false, 00:05:41.927 "zoned": false, 00:05:41.927 "supported_io_types": { 00:05:41.927 "read": true, 00:05:41.927 "write": true, 00:05:41.927 "unmap": true, 00:05:41.927 "flush": true, 00:05:41.927 "reset": true, 00:05:41.927 "nvme_admin": false, 00:05:41.927 "nvme_io": false, 00:05:41.927 "nvme_io_md": false, 00:05:41.927 "write_zeroes": true, 00:05:41.927 "zcopy": true, 00:05:41.927 "get_zone_info": false, 00:05:41.927 "zone_management": false, 00:05:41.927 "zone_append": false, 00:05:41.927 "compare": false, 00:05:41.927 "compare_and_write": false, 00:05:41.927 "abort": true, 00:05:41.927 "seek_hole": false, 00:05:41.927 "seek_data": false, 00:05:41.927 "copy": true, 00:05:41.927 "nvme_iov_md": false 00:05:41.927 }, 00:05:41.927 "memory_domains": [ 00:05:41.927 { 00:05:41.927 "dma_device_id": "system", 00:05:41.927 "dma_device_type": 1 00:05:41.927 }, 00:05:41.927 { 00:05:41.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.927 "dma_device_type": 2 00:05:41.927 } 00:05:41.927 ], 00:05:41.927 "driver_specific": {} 00:05:41.927 } 00:05:41.927 ]' 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.927 [2024-07-15 15:47:08.674746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.927 [2024-07-15 15:47:08.674792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.927 [2024-07-15 15:47:08.674815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d99d50 00:05:41.927 [2024-07-15 15:47:08.674830] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.927 
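The NOTICE lines at this point are the passthru module claiming its base bdev: bdev_passthru_create -b Malloc0 -p Passthru0 matches on Malloc0, opens it, takes an exclusive_write claim (it shows up as "claimed": true in the next bdev_get_bdevs dump), and registers Passthru0 on top. The whole rpc_integrity flow can be replayed with the same RPC names seen here; a sketch assuming the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper:

    rpc=./scripts/rpc.py
    # 8 MiB malloc bdev with 512-byte blocks; the RPC prints the new name ("Malloc0").
    malloc=$($rpc bdev_malloc_create 8 512)
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length      # 2: the base bdev plus the passthru stacked on it
    $rpc bdev_passthru_delete Passthru0  # releases the exclusive_write claim on Malloc0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length      # back to 0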
[2024-07-15 15:47:08.676349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.927 [2024-07-15 15:47:08.676376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.927 Passthru0 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.927 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.927 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.927 { 00:05:41.927 "name": "Malloc0", 00:05:41.927 "aliases": [ 00:05:41.927 "57760c46-bd34-46f5-9e98-1d8e76f2eba2" 00:05:41.927 ], 00:05:41.927 "product_name": "Malloc disk", 00:05:41.927 "block_size": 512, 00:05:41.927 "num_blocks": 16384, 00:05:41.927 "uuid": "57760c46-bd34-46f5-9e98-1d8e76f2eba2", 00:05:41.927 "assigned_rate_limits": { 00:05:41.927 "rw_ios_per_sec": 0, 00:05:41.927 "rw_mbytes_per_sec": 0, 00:05:41.927 "r_mbytes_per_sec": 0, 00:05:41.927 "w_mbytes_per_sec": 0 00:05:41.927 }, 00:05:41.927 "claimed": true, 00:05:41.927 "claim_type": "exclusive_write", 00:05:41.927 "zoned": false, 00:05:41.927 "supported_io_types": { 00:05:41.927 "read": true, 00:05:41.927 "write": true, 00:05:41.927 "unmap": true, 00:05:41.927 "flush": true, 00:05:41.927 "reset": true, 00:05:41.927 "nvme_admin": false, 00:05:41.927 "nvme_io": false, 00:05:41.927 "nvme_io_md": false, 00:05:41.927 "write_zeroes": true, 00:05:41.927 "zcopy": true, 00:05:41.927 "get_zone_info": false, 00:05:41.927 "zone_management": false, 00:05:41.927 "zone_append": false, 00:05:41.927 "compare": false, 00:05:41.927 "compare_and_write": false, 00:05:41.927 "abort": true, 00:05:41.927 "seek_hole": false, 00:05:41.927 "seek_data": false, 00:05:41.927 "copy": true, 00:05:41.927 "nvme_iov_md": false 00:05:41.927 }, 00:05:41.927 "memory_domains": [ 00:05:41.927 { 00:05:41.927 "dma_device_id": "system", 00:05:41.927 "dma_device_type": 1 00:05:41.927 }, 00:05:41.927 { 00:05:41.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.928 "dma_device_type": 2 00:05:41.928 } 00:05:41.928 ], 00:05:41.928 "driver_specific": {} 00:05:41.928 }, 00:05:41.928 { 00:05:41.928 "name": "Passthru0", 00:05:41.928 "aliases": [ 00:05:41.928 "be38fcd5-5a18-5c97-a3b8-8da6d44369e1" 00:05:41.928 ], 00:05:41.928 "product_name": "passthru", 00:05:41.928 "block_size": 512, 00:05:41.928 "num_blocks": 16384, 00:05:41.928 "uuid": "be38fcd5-5a18-5c97-a3b8-8da6d44369e1", 00:05:41.928 "assigned_rate_limits": { 00:05:41.928 "rw_ios_per_sec": 0, 00:05:41.928 "rw_mbytes_per_sec": 0, 00:05:41.928 "r_mbytes_per_sec": 0, 00:05:41.928 "w_mbytes_per_sec": 0 00:05:41.928 }, 00:05:41.928 "claimed": false, 00:05:41.928 "zoned": false, 00:05:41.928 "supported_io_types": { 00:05:41.928 "read": true, 00:05:41.928 "write": true, 00:05:41.928 "unmap": true, 00:05:41.928 "flush": true, 00:05:41.928 "reset": true, 00:05:41.928 "nvme_admin": false, 00:05:41.928 "nvme_io": false, 00:05:41.928 "nvme_io_md": false, 00:05:41.928 "write_zeroes": true, 00:05:41.928 "zcopy": true, 00:05:41.928 "get_zone_info": false, 00:05:41.928 "zone_management": false, 00:05:41.928 "zone_append": false, 00:05:41.928 "compare": false, 00:05:41.928 "compare_and_write": false, 00:05:41.928 "abort": true, 00:05:41.928 "seek_hole": false, 
00:05:41.928 "seek_data": false, 00:05:41.928 "copy": true, 00:05:41.928 "nvme_iov_md": false 00:05:41.928 }, 00:05:41.928 "memory_domains": [ 00:05:41.928 { 00:05:41.928 "dma_device_id": "system", 00:05:41.928 "dma_device_type": 1 00:05:41.928 }, 00:05:41.928 { 00:05:41.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.928 "dma_device_type": 2 00:05:41.928 } 00:05:41.928 ], 00:05:41.928 "driver_specific": { 00:05:41.928 "passthru": { 00:05:41.928 "name": "Passthru0", 00:05:41.928 "base_bdev_name": "Malloc0" 00:05:41.928 } 00:05:41.928 } 00:05:41.928 } 00:05:41.928 ]' 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.928 15:47:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.928 00:05:41.928 real 0m0.232s 00:05:41.928 user 0m0.153s 00:05:41.928 sys 0m0.024s 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 ************************************ 00:05:41.928 END TEST rpc_integrity 00:05:41.928 ************************************ 00:05:41.928 15:47:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.928 15:47:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.928 15:47:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.928 15:47:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.928 15:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 ************************************ 00:05:41.928 START TEST rpc_plugins 00:05:41.928 ************************************ 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:41.928 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.928 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.928 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.928 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.187 { 00:05:42.187 "name": "Malloc1", 00:05:42.187 "aliases": [ 00:05:42.187 "c8b570e6-8c03-420e-8825-9d5f614c9c95" 00:05:42.187 ], 00:05:42.187 "product_name": "Malloc disk", 00:05:42.187 "block_size": 4096, 00:05:42.187 "num_blocks": 256, 00:05:42.187 "uuid": "c8b570e6-8c03-420e-8825-9d5f614c9c95", 00:05:42.187 "assigned_rate_limits": { 00:05:42.187 "rw_ios_per_sec": 0, 00:05:42.187 "rw_mbytes_per_sec": 0, 00:05:42.187 "r_mbytes_per_sec": 0, 00:05:42.187 "w_mbytes_per_sec": 0 00:05:42.187 }, 00:05:42.187 "claimed": false, 00:05:42.187 "zoned": false, 00:05:42.187 "supported_io_types": { 00:05:42.187 "read": true, 00:05:42.187 "write": true, 00:05:42.187 "unmap": true, 00:05:42.187 "flush": true, 00:05:42.187 "reset": true, 00:05:42.187 "nvme_admin": false, 00:05:42.187 "nvme_io": false, 00:05:42.187 "nvme_io_md": false, 00:05:42.187 "write_zeroes": true, 00:05:42.187 "zcopy": true, 00:05:42.187 "get_zone_info": false, 00:05:42.187 "zone_management": false, 00:05:42.187 "zone_append": false, 00:05:42.187 "compare": false, 00:05:42.187 "compare_and_write": false, 00:05:42.187 "abort": true, 00:05:42.187 "seek_hole": false, 00:05:42.187 "seek_data": false, 00:05:42.187 "copy": true, 00:05:42.187 "nvme_iov_md": false 00:05:42.187 }, 00:05:42.187 "memory_domains": [ 00:05:42.187 { 00:05:42.187 "dma_device_id": "system", 00:05:42.187 "dma_device_type": 1 00:05:42.187 }, 00:05:42.187 { 00:05:42.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.187 "dma_device_type": 2 00:05:42.187 } 00:05:42.187 ], 00:05:42.187 "driver_specific": {} 00:05:42.187 } 00:05:42.187 ]' 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.187 15:47:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.187 00:05:42.187 real 0m0.110s 00:05:42.187 user 0m0.073s 00:05:42.187 sys 0m0.008s 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.187 15:47:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 ************************************ 00:05:42.187 END TEST rpc_plugins 00:05:42.187 ************************************ 00:05:42.187 15:47:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.187 15:47:08 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.187 15:47:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.187 15:47:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.187 15:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 ************************************ 00:05:42.187 START TEST rpc_trace_cmd_test 00:05:42.187 ************************************ 00:05:42.187 15:47:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:42.187 15:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.187 15:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.187 15:47:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.187 15:47:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.187 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1030366", 00:05:42.187 "tpoint_group_mask": "0x8", 00:05:42.187 "iscsi_conn": { 00:05:42.187 "mask": "0x2", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "scsi": { 00:05:42.187 "mask": "0x4", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "bdev": { 00:05:42.187 "mask": "0x8", 00:05:42.187 "tpoint_mask": "0xffffffffffffffff" 00:05:42.187 }, 00:05:42.187 "nvmf_rdma": { 00:05:42.187 "mask": "0x10", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "nvmf_tcp": { 00:05:42.187 "mask": "0x20", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "ftl": { 00:05:42.187 "mask": "0x40", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "blobfs": { 00:05:42.187 "mask": "0x80", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "dsa": { 00:05:42.187 "mask": "0x200", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "thread": { 00:05:42.187 "mask": "0x400", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "nvme_pcie": { 00:05:42.187 "mask": "0x800", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "iaa": { 00:05:42.187 "mask": "0x1000", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "nvme_tcp": { 00:05:42.187 "mask": "0x2000", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "bdev_nvme": { 00:05:42.187 "mask": "0x4000", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 }, 00:05:42.187 "sock": { 00:05:42.187 "mask": "0x8000", 00:05:42.187 "tpoint_mask": "0x0" 00:05:42.187 } 00:05:42.187 }' 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.187 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
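The jq probes above unpack what '-e bdev' configured at startup: trace_get_info reports tpoint_group_mask 0x8 (the bdev group), a fully set bdev tpoint_mask, and the shared-memory file that the spdk_trace tool consumes, the same /dev/shm/spdk_tgt_trace.pid1030366 path advertised in the startup notices. A hand-run version of those assertions, with the expected values copied from this output:

    info=$(./scripts/rpc.py trace_get_info)
    # Group mask 0x8 is the bdev tracepoint group enabled by 'spdk_tgt -e bdev'.
    [ "$(echo "$info" | jq -r .tpoint_group_mask)" = "0x8" ]
    # Within that group, every bdev tracepoint is switched on.
    [ "$(echo "$info" | jq -r .bdev.tpoint_mask)" = "0xffffffffffffffff" ]
    # This shm file is what 'spdk_trace -s spdk_tgt -p <pid>' reads for a snapshot.
    echo "$info" | jq -r .tpoint_shm_path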
00:05:42.445 00:05:42.445 real 0m0.204s 00:05:42.445 user 0m0.180s 00:05:42.445 sys 0m0.017s 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.445 15:47:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.445 ************************************ 00:05:42.445 END TEST rpc_trace_cmd_test 00:05:42.445 ************************************ 00:05:42.445 15:47:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.445 15:47:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.445 15:47:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.445 15:47:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.445 15:47:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.445 15:47:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.445 15:47:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.445 ************************************ 00:05:42.445 START TEST rpc_daemon_integrity 00:05:42.445 ************************************ 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.445 { 00:05:42.445 "name": "Malloc2", 00:05:42.445 "aliases": [ 00:05:42.445 "80d71700-7a43-48c4-bfd1-3efc759423d2" 00:05:42.445 ], 00:05:42.445 "product_name": "Malloc disk", 00:05:42.445 "block_size": 512, 00:05:42.445 "num_blocks": 16384, 00:05:42.445 "uuid": "80d71700-7a43-48c4-bfd1-3efc759423d2", 00:05:42.445 "assigned_rate_limits": { 00:05:42.445 "rw_ios_per_sec": 0, 00:05:42.445 "rw_mbytes_per_sec": 0, 00:05:42.445 "r_mbytes_per_sec": 0, 00:05:42.445 "w_mbytes_per_sec": 0 00:05:42.445 }, 00:05:42.445 "claimed": false, 00:05:42.445 "zoned": false, 00:05:42.445 "supported_io_types": { 00:05:42.445 "read": true, 00:05:42.445 "write": true, 00:05:42.445 "unmap": true, 00:05:42.445 "flush": true, 00:05:42.445 "reset": true, 00:05:42.445 "nvme_admin": false, 00:05:42.445 "nvme_io": false, 
00:05:42.445 "nvme_io_md": false, 00:05:42.445 "write_zeroes": true, 00:05:42.445 "zcopy": true, 00:05:42.445 "get_zone_info": false, 00:05:42.445 "zone_management": false, 00:05:42.445 "zone_append": false, 00:05:42.445 "compare": false, 00:05:42.445 "compare_and_write": false, 00:05:42.445 "abort": true, 00:05:42.445 "seek_hole": false, 00:05:42.445 "seek_data": false, 00:05:42.445 "copy": true, 00:05:42.445 "nvme_iov_md": false 00:05:42.445 }, 00:05:42.445 "memory_domains": [ 00:05:42.445 { 00:05:42.445 "dma_device_id": "system", 00:05:42.445 "dma_device_type": 1 00:05:42.445 }, 00:05:42.445 { 00:05:42.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.445 "dma_device_type": 2 00:05:42.445 } 00:05:42.445 ], 00:05:42.445 "driver_specific": {} 00:05:42.445 } 00:05:42.445 ]' 00:05:42.445 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.446 [2024-07-15 15:47:09.361002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.446 [2024-07-15 15:47:09.361042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.446 [2024-07-15 15:47:09.361067] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d99980 00:05:42.446 [2024-07-15 15:47:09.361082] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.446 [2024-07-15 15:47:09.362438] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.446 [2024-07-15 15:47:09.362466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.446 Passthru0 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.446 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.704 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.704 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.704 { 00:05:42.704 "name": "Malloc2", 00:05:42.704 "aliases": [ 00:05:42.704 "80d71700-7a43-48c4-bfd1-3efc759423d2" 00:05:42.704 ], 00:05:42.704 "product_name": "Malloc disk", 00:05:42.704 "block_size": 512, 00:05:42.704 "num_blocks": 16384, 00:05:42.704 "uuid": "80d71700-7a43-48c4-bfd1-3efc759423d2", 00:05:42.704 "assigned_rate_limits": { 00:05:42.704 "rw_ios_per_sec": 0, 00:05:42.704 "rw_mbytes_per_sec": 0, 00:05:42.704 "r_mbytes_per_sec": 0, 00:05:42.704 "w_mbytes_per_sec": 0 00:05:42.704 }, 00:05:42.704 "claimed": true, 00:05:42.704 "claim_type": "exclusive_write", 00:05:42.704 "zoned": false, 00:05:42.704 "supported_io_types": { 00:05:42.704 "read": true, 00:05:42.704 "write": true, 00:05:42.704 "unmap": true, 00:05:42.704 "flush": true, 00:05:42.704 "reset": true, 00:05:42.704 "nvme_admin": false, 00:05:42.704 "nvme_io": false, 00:05:42.704 "nvme_io_md": false, 00:05:42.704 "write_zeroes": true, 00:05:42.704 "zcopy": true, 00:05:42.704 "get_zone_info": 
false, 00:05:42.704 "zone_management": false, 00:05:42.704 "zone_append": false, 00:05:42.704 "compare": false, 00:05:42.704 "compare_and_write": false, 00:05:42.704 "abort": true, 00:05:42.704 "seek_hole": false, 00:05:42.704 "seek_data": false, 00:05:42.704 "copy": true, 00:05:42.704 "nvme_iov_md": false 00:05:42.704 }, 00:05:42.704 "memory_domains": [ 00:05:42.704 { 00:05:42.704 "dma_device_id": "system", 00:05:42.704 "dma_device_type": 1 00:05:42.704 }, 00:05:42.704 { 00:05:42.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.704 "dma_device_type": 2 00:05:42.704 } 00:05:42.704 ], 00:05:42.704 "driver_specific": {} 00:05:42.704 }, 00:05:42.704 { 00:05:42.704 "name": "Passthru0", 00:05:42.704 "aliases": [ 00:05:42.704 "ec0e8415-ae90-5e2f-9021-e3cc611274f8" 00:05:42.704 ], 00:05:42.704 "product_name": "passthru", 00:05:42.704 "block_size": 512, 00:05:42.704 "num_blocks": 16384, 00:05:42.704 "uuid": "ec0e8415-ae90-5e2f-9021-e3cc611274f8", 00:05:42.704 "assigned_rate_limits": { 00:05:42.704 "rw_ios_per_sec": 0, 00:05:42.704 "rw_mbytes_per_sec": 0, 00:05:42.704 "r_mbytes_per_sec": 0, 00:05:42.704 "w_mbytes_per_sec": 0 00:05:42.704 }, 00:05:42.704 "claimed": false, 00:05:42.704 "zoned": false, 00:05:42.704 "supported_io_types": { 00:05:42.704 "read": true, 00:05:42.704 "write": true, 00:05:42.704 "unmap": true, 00:05:42.704 "flush": true, 00:05:42.704 "reset": true, 00:05:42.704 "nvme_admin": false, 00:05:42.704 "nvme_io": false, 00:05:42.704 "nvme_io_md": false, 00:05:42.704 "write_zeroes": true, 00:05:42.704 "zcopy": true, 00:05:42.704 "get_zone_info": false, 00:05:42.704 "zone_management": false, 00:05:42.704 "zone_append": false, 00:05:42.704 "compare": false, 00:05:42.705 "compare_and_write": false, 00:05:42.705 "abort": true, 00:05:42.705 "seek_hole": false, 00:05:42.705 "seek_data": false, 00:05:42.705 "copy": true, 00:05:42.705 "nvme_iov_md": false 00:05:42.705 }, 00:05:42.705 "memory_domains": [ 00:05:42.705 { 00:05:42.705 "dma_device_id": "system", 00:05:42.705 "dma_device_type": 1 00:05:42.705 }, 00:05:42.705 { 00:05:42.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.705 "dma_device_type": 2 00:05:42.705 } 00:05:42.705 ], 00:05:42.705 "driver_specific": { 00:05:42.705 "passthru": { 00:05:42.705 "name": "Passthru0", 00:05:42.705 "base_bdev_name": "Malloc2" 00:05:42.705 } 00:05:42.705 } 00:05:42.705 } 00:05:42.705 ]' 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.705 15:47:09 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.705 00:05:42.705 real 0m0.234s 00:05:42.705 user 0m0.156s 00:05:42.705 sys 0m0.021s 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.705 15:47:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.705 ************************************ 00:05:42.705 END TEST rpc_daemon_integrity 00:05:42.705 ************************************ 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.705 15:47:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.705 15:47:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1030366 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@948 -- # '[' -z 1030366 ']' 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@952 -- # kill -0 1030366 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@953 -- # uname 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030366 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030366' 00:05:42.705 killing process with pid 1030366 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@967 -- # kill 1030366 00:05:42.705 15:47:09 rpc -- common/autotest_common.sh@972 -- # wait 1030366 00:05:43.271 00:05:43.271 real 0m1.989s 00:05:43.271 user 0m2.532s 00:05:43.271 sys 0m0.585s 00:05:43.271 15:47:09 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.271 15:47:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.271 ************************************ 00:05:43.271 END TEST rpc 00:05:43.271 ************************************ 00:05:43.271 15:47:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.271 15:47:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.271 15:47:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.271 15:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.271 15:47:10 -- common/autotest_common.sh@10 -- # set +x 00:05:43.271 ************************************ 00:05:43.271 START TEST skip_rpc 00:05:43.271 ************************************ 00:05:43.271 15:47:10 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.271 * Looking for test storage... 
00:05:43.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.271 15:47:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.271 15:47:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.271 15:47:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.271 15:47:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.271 15:47:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.271 15:47:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.271 ************************************ 00:05:43.271 START TEST skip_rpc 00:05:43.271 ************************************ 00:05:43.271 15:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:43.271 15:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1030803 00:05:43.271 15:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.271 15:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.271 15:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.271 [2024-07-15 15:47:10.155525] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:43.271 [2024-07-15 15:47:10.155622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030803 ] 00:05:43.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.531 [2024-07-15 15:47:10.216415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.531 [2024-07-15 15:47:10.327484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1030803 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1030803 ']' 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1030803 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030803 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030803' 00:05:48.796 killing process with pid 1030803 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1030803 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1030803 00:05:48.796 00:05:48.796 real 0m5.512s 00:05:48.796 user 0m5.182s 00:05:48.796 sys 0m0.329s 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.796 15:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 ************************************ 00:05:48.796 END TEST skip_rpc 00:05:48.796 ************************************ 00:05:48.796 15:47:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.796 15:47:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.796 15:47:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.796 15:47:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.796 15:47:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 ************************************ 00:05:48.796 START TEST skip_rpc_with_json 00:05:48.796 ************************************ 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1031493 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1031493 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1031493 ']' 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
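The skip_rpc case that completed above, before this next target began starting, is a pure negative test: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so the harness sleeps instead of calling waitforlisten and then requires the RPC to fail (the NOT helper records es=1 from rpc_cmd and converts it into success). Reduced to plain shell, with '!' standing in for NOT:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                              # no socket will appear; just give the target time to boot
    ! ./scripts/rpc.py spdk_get_version  # must fail: nothing is listening on /var/tmp/spdk.sock
    kill $pid && wait $pid               # stand-in for the harness's killprocess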
00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.796 15:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 [2024-07-15 15:47:15.711513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:48.796 [2024-07-15 15:47:15.711606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031493 ] 00:05:49.055 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.055 [2024-07-15 15:47:15.775564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.055 [2024-07-15 15:47:15.894036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.313 [2024-07-15 15:47:16.160896] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.313 request: 00:05:49.313 { 00:05:49.313 "trtype": "tcp", 00:05:49.313 "method": "nvmf_get_transports", 00:05:49.313 "req_id": 1 00:05:49.313 } 00:05:49.313 Got JSON-RPC error response 00:05:49.313 response: 00:05:49.313 { 00:05:49.313 "code": -19, 00:05:49.313 "message": "No such device" 00:05:49.313 } 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.313 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.314 [2024-07-15 15:47:16.169028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.314 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.314 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.314 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.314 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.573 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.573 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.573 { 00:05:49.573 "subsystems": [ 00:05:49.573 { 00:05:49.573 "subsystem": "vfio_user_target", 00:05:49.573 "config": null 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "keyring", 00:05:49.573 "config": [] 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "iobuf", 00:05:49.573 "config": [ 00:05:49.573 { 00:05:49.573 "method": "iobuf_set_options", 00:05:49.573 "params": { 00:05:49.573 "small_pool_count": 8192, 00:05:49.573 "large_pool_count": 1024, 00:05:49.573 "small_bufsize": 8192, 00:05:49.573 "large_bufsize": 
135168 00:05:49.573 } 00:05:49.573 } 00:05:49.573 ] 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "sock", 00:05:49.573 "config": [ 00:05:49.573 { 00:05:49.573 "method": "sock_set_default_impl", 00:05:49.573 "params": { 00:05:49.573 "impl_name": "posix" 00:05:49.573 } 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "method": "sock_impl_set_options", 00:05:49.573 "params": { 00:05:49.573 "impl_name": "ssl", 00:05:49.573 "recv_buf_size": 4096, 00:05:49.573 "send_buf_size": 4096, 00:05:49.573 "enable_recv_pipe": true, 00:05:49.573 "enable_quickack": false, 00:05:49.573 "enable_placement_id": 0, 00:05:49.573 "enable_zerocopy_send_server": true, 00:05:49.573 "enable_zerocopy_send_client": false, 00:05:49.573 "zerocopy_threshold": 0, 00:05:49.573 "tls_version": 0, 00:05:49.573 "enable_ktls": false 00:05:49.573 } 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "method": "sock_impl_set_options", 00:05:49.573 "params": { 00:05:49.573 "impl_name": "posix", 00:05:49.573 "recv_buf_size": 2097152, 00:05:49.573 "send_buf_size": 2097152, 00:05:49.573 "enable_recv_pipe": true, 00:05:49.573 "enable_quickack": false, 00:05:49.573 "enable_placement_id": 0, 00:05:49.573 "enable_zerocopy_send_server": true, 00:05:49.573 "enable_zerocopy_send_client": false, 00:05:49.573 "zerocopy_threshold": 0, 00:05:49.573 "tls_version": 0, 00:05:49.573 "enable_ktls": false 00:05:49.573 } 00:05:49.573 } 00:05:49.573 ] 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "vmd", 00:05:49.573 "config": [] 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "accel", 00:05:49.573 "config": [ 00:05:49.573 { 00:05:49.573 "method": "accel_set_options", 00:05:49.573 "params": { 00:05:49.573 "small_cache_size": 128, 00:05:49.573 "large_cache_size": 16, 00:05:49.573 "task_count": 2048, 00:05:49.573 "sequence_count": 2048, 00:05:49.573 "buf_count": 2048 00:05:49.573 } 00:05:49.573 } 00:05:49.573 ] 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "subsystem": "bdev", 00:05:49.573 "config": [ 00:05:49.573 { 00:05:49.573 "method": "bdev_set_options", 00:05:49.573 "params": { 00:05:49.573 "bdev_io_pool_size": 65535, 00:05:49.573 "bdev_io_cache_size": 256, 00:05:49.573 "bdev_auto_examine": true, 00:05:49.573 "iobuf_small_cache_size": 128, 00:05:49.573 "iobuf_large_cache_size": 16 00:05:49.573 } 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "method": "bdev_raid_set_options", 00:05:49.573 "params": { 00:05:49.573 "process_window_size_kb": 1024 00:05:49.573 } 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "method": "bdev_iscsi_set_options", 00:05:49.573 "params": { 00:05:49.573 "timeout_sec": 30 00:05:49.573 } 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "method": "bdev_nvme_set_options", 00:05:49.573 "params": { 00:05:49.573 "action_on_timeout": "none", 00:05:49.573 "timeout_us": 0, 00:05:49.573 "timeout_admin_us": 0, 00:05:49.573 "keep_alive_timeout_ms": 10000, 00:05:49.573 "arbitration_burst": 0, 00:05:49.573 "low_priority_weight": 0, 00:05:49.573 "medium_priority_weight": 0, 00:05:49.573 "high_priority_weight": 0, 00:05:49.573 "nvme_adminq_poll_period_us": 10000, 00:05:49.573 "nvme_ioq_poll_period_us": 0, 00:05:49.573 "io_queue_requests": 0, 00:05:49.573 "delay_cmd_submit": true, 00:05:49.573 "transport_retry_count": 4, 00:05:49.573 "bdev_retry_count": 3, 00:05:49.574 "transport_ack_timeout": 0, 00:05:49.574 "ctrlr_loss_timeout_sec": 0, 00:05:49.574 "reconnect_delay_sec": 0, 00:05:49.574 "fast_io_fail_timeout_sec": 0, 00:05:49.574 "disable_auto_failback": false, 00:05:49.574 "generate_uuids": false, 00:05:49.574 "transport_tos": 0, 
00:05:49.574 "nvme_error_stat": false, 00:05:49.574 "rdma_srq_size": 0, 00:05:49.574 "io_path_stat": false, 00:05:49.574 "allow_accel_sequence": false, 00:05:49.574 "rdma_max_cq_size": 0, 00:05:49.574 "rdma_cm_event_timeout_ms": 0, 00:05:49.574 "dhchap_digests": [ 00:05:49.574 "sha256", 00:05:49.574 "sha384", 00:05:49.574 "sha512" 00:05:49.574 ], 00:05:49.574 "dhchap_dhgroups": [ 00:05:49.574 "null", 00:05:49.574 "ffdhe2048", 00:05:49.574 "ffdhe3072", 00:05:49.574 "ffdhe4096", 00:05:49.574 "ffdhe6144", 00:05:49.574 "ffdhe8192" 00:05:49.574 ] 00:05:49.574 } 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "method": "bdev_nvme_set_hotplug", 00:05:49.574 "params": { 00:05:49.574 "period_us": 100000, 00:05:49.574 "enable": false 00:05:49.574 } 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "method": "bdev_wait_for_examine" 00:05:49.574 } 00:05:49.574 ] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "scsi", 00:05:49.574 "config": null 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "scheduler", 00:05:49.574 "config": [ 00:05:49.574 { 00:05:49.574 "method": "framework_set_scheduler", 00:05:49.574 "params": { 00:05:49.574 "name": "static" 00:05:49.574 } 00:05:49.574 } 00:05:49.574 ] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "vhost_scsi", 00:05:49.574 "config": [] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "vhost_blk", 00:05:49.574 "config": [] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "ublk", 00:05:49.574 "config": [] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "nbd", 00:05:49.574 "config": [] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "nvmf", 00:05:49.574 "config": [ 00:05:49.574 { 00:05:49.574 "method": "nvmf_set_config", 00:05:49.574 "params": { 00:05:49.574 "discovery_filter": "match_any", 00:05:49.574 "admin_cmd_passthru": { 00:05:49.574 "identify_ctrlr": false 00:05:49.574 } 00:05:49.574 } 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "method": "nvmf_set_max_subsystems", 00:05:49.574 "params": { 00:05:49.574 "max_subsystems": 1024 00:05:49.574 } 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "method": "nvmf_set_crdt", 00:05:49.574 "params": { 00:05:49.574 "crdt1": 0, 00:05:49.574 "crdt2": 0, 00:05:49.574 "crdt3": 0 00:05:49.574 } 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "method": "nvmf_create_transport", 00:05:49.574 "params": { 00:05:49.574 "trtype": "TCP", 00:05:49.574 "max_queue_depth": 128, 00:05:49.574 "max_io_qpairs_per_ctrlr": 127, 00:05:49.574 "in_capsule_data_size": 4096, 00:05:49.574 "max_io_size": 131072, 00:05:49.574 "io_unit_size": 131072, 00:05:49.574 "max_aq_depth": 128, 00:05:49.574 "num_shared_buffers": 511, 00:05:49.574 "buf_cache_size": 4294967295, 00:05:49.574 "dif_insert_or_strip": false, 00:05:49.574 "zcopy": false, 00:05:49.574 "c2h_success": true, 00:05:49.574 "sock_priority": 0, 00:05:49.574 "abort_timeout_sec": 1, 00:05:49.574 "ack_timeout": 0, 00:05:49.574 "data_wr_pool_size": 0 00:05:49.574 } 00:05:49.574 } 00:05:49.574 ] 00:05:49.574 }, 00:05:49.574 { 00:05:49.574 "subsystem": "iscsi", 00:05:49.574 "config": [ 00:05:49.574 { 00:05:49.574 "method": "iscsi_set_options", 00:05:49.574 "params": { 00:05:49.574 "node_base": "iqn.2016-06.io.spdk", 00:05:49.574 "max_sessions": 128, 00:05:49.574 "max_connections_per_session": 2, 00:05:49.574 "max_queue_depth": 64, 00:05:49.574 "default_time2wait": 2, 00:05:49.574 "default_time2retain": 20, 00:05:49.574 "first_burst_length": 8192, 00:05:49.574 "immediate_data": true, 00:05:49.574 "allow_duplicated_isid": false, 00:05:49.574 
"error_recovery_level": 0, 00:05:49.574 "nop_timeout": 60, 00:05:49.574 "nop_in_interval": 30, 00:05:49.574 "disable_chap": false, 00:05:49.574 "require_chap": false, 00:05:49.574 "mutual_chap": false, 00:05:49.574 "chap_group": 0, 00:05:49.574 "max_large_datain_per_connection": 64, 00:05:49.574 "max_r2t_per_connection": 4, 00:05:49.574 "pdu_pool_size": 36864, 00:05:49.574 "immediate_data_pool_size": 16384, 00:05:49.574 "data_out_pool_size": 2048 00:05:49.574 } 00:05:49.574 } 00:05:49.574 ] 00:05:49.574 } 00:05:49.574 ] 00:05:49.574 } 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1031493 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1031493 ']' 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1031493 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031493 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031493' 00:05:49.574 killing process with pid 1031493 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1031493 00:05:49.574 15:47:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1031493 00:05:50.141 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1031633 00:05:50.141 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.141 15:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1031633 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1031633 ']' 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1031633 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031633 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031633' 00:05:55.437 killing process with pid 1031633 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1031633 00:05:55.437 15:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1031633 
00:05:55.437 15:47:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.437 15:47:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.437 00:05:55.437 real 0m6.647s 00:05:55.437 user 0m6.245s 00:05:55.437 sys 0m0.722s 00:05:55.437 15:47:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.437 15:47:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.437 ************************************ 00:05:55.437 END TEST skip_rpc_with_json 00:05:55.437 ************************************ 00:05:55.437 15:47:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.437 15:47:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.438 15:47:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.438 15:47:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.438 15:47:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.438 ************************************ 00:05:55.438 START TEST skip_rpc_with_delay 00:05:55.438 ************************************ 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.438 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.696 [2024-07-15 15:47:22.412061] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:55.696 [2024-07-15 15:47:22.412186] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.696 00:05:55.696 real 0m0.076s 00:05:55.696 user 0m0.048s 00:05:55.696 sys 0m0.028s 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.696 15:47:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.696 ************************************ 00:05:55.696 END TEST skip_rpc_with_delay 00:05:55.696 ************************************ 00:05:55.696 15:47:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.696 15:47:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.696 15:47:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.696 15:47:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.696 15:47:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.696 15:47:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.696 15:47:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.696 ************************************ 00:05:55.696 START TEST exit_on_failed_rpc_init 00:05:55.696 ************************************ 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1032346 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1032346 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1032346 ']' 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.696 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.696 [2024-07-15 15:47:22.528739] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:55.696 [2024-07-15 15:47:22.528839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032346 ] 00:05:55.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.696 [2024-07-15 15:47:22.585742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.954 [2024-07-15 15:47:22.695481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.215 15:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.215 [2024-07-15 15:47:23.007662] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
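With the first target now up on the default socket, the next traced steps are the heart of exit_on_failed_rpc_init: a second spdk_tgt pointed at the same /var/tmp/spdk.sock must fail rpc_listen and exit non-zero. A hedged sketch of that flow; the fixed sleep is a crude stand-in for the harness's waitforlisten helper, and a wrongly successful second start would block here (the real test wraps it in its NOT helper):

    ./build/bin/spdk_tgt -m 0x1 &           # first instance claims /var/tmp/spdk.sock
    pid=$!
    sleep 1                                 # stand-in for waitforlisten "$pid"
    if ./build/bin/spdk_tgt -m 0x2; then    # same default socket -> "in use. Specify another."
        echo "FAIL: second target should not have started" >&2
    fi
    kill -SIGINT "$pid"; wait "$pid"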
00:05:56.215 [2024-07-15 15:47:23.007758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032362 ] 00:05:56.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.215 [2024-07-15 15:47:23.068742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.474 [2024-07-15 15:47:23.188594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.474 [2024-07-15 15:47:23.188715] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:56.474 [2024-07-15 15:47:23.188742] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.474 [2024-07-15 15:47:23.188756] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1032346 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1032346 ']' 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1032346 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032346 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032346' 00:05:56.474 killing process with pid 1032346 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1032346 00:05:56.474 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1032346 00:05:57.042 00:05:57.042 real 0m1.328s 00:05:57.042 user 0m1.507s 00:05:57.042 sys 0m0.443s 00:05:57.042 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.042 15:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 ************************************ 00:05:57.042 END TEST exit_on_failed_rpc_init 00:05:57.042 ************************************ 00:05:57.042 15:47:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.042 15:47:23 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.042 00:05:57.042 real 0m13.802s 00:05:57.042 user 0m13.079s 00:05:57.042 sys 0m1.678s 00:05:57.042 15:47:23 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.042 15:47:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 ************************************ 00:05:57.042 END TEST skip_rpc 00:05:57.042 ************************************ 00:05:57.042 15:47:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.042 15:47:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.042 15:47:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.042 15:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.042 15:47:23 -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 ************************************ 00:05:57.042 START TEST rpc_client 00:05:57.042 ************************************ 00:05:57.042 15:47:23 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.042 * Looking for test storage... 00:05:57.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:57.042 15:47:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:57.042 OK 00:05:57.042 15:47:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.042 00:05:57.042 real 0m0.069s 00:05:57.042 user 0m0.031s 00:05:57.042 sys 0m0.043s 00:05:57.042 15:47:23 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.042 15:47:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 ************************************ 00:05:57.042 END TEST rpc_client 00:05:57.042 ************************************ 00:05:57.043 15:47:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.043 15:47:23 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.043 15:47:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.043 15:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.043 15:47:23 -- common/autotest_common.sh@10 -- # set +x 00:05:57.301 ************************************ 00:05:57.301 START TEST json_config 00:05:57.301 ************************************ 00:05:57.301 15:47:23 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.301 15:47:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.301 
15:47:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.301 15:47:24 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.301 15:47:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.301 15:47:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.301 15:47:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.302 15:47:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.302 15:47:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.302 15:47:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.302 15:47:24 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.302 15:47:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@47 -- # : 0 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.302 15:47:24 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.302 15:47:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:57.302 INFO: JSON configuration test init 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 15:47:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:57.302 15:47:24 json_config -- json_config/common.sh@9 -- # local app=target 00:05:57.302 15:47:24 json_config -- json_config/common.sh@10 -- # shift 00:05:57.302 15:47:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.302 15:47:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.302 15:47:24 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.302 15:47:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.302 15:47:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.302 15:47:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1032604 00:05:57.302 15:47:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:57.302 15:47:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.302 Waiting for target to run... 00:05:57.302 15:47:24 json_config -- json_config/common.sh@25 -- # waitforlisten 1032604 /var/tmp/spdk_tgt.sock 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 1032604 ']' 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.302 15:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 [2024-07-15 15:47:24.104966] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:57.302 [2024-07-15 15:47:24.105070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032604 ] 00:05:57.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.560 [2024-07-15 15:47:24.449920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.820 [2024-07-15 15:47:24.543175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:58.391 15:47:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.391 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.391 15:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:58.391 15:47:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:58.391 15:47:25 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:01.676 15:47:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.676 15:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:01.676 15:47:28 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.676 15:47:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.934 MallocForNvmf0 00:06:01.934 15:47:28 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.934 15:47:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.192 MallocForNvmf1 00:06:02.192 15:47:28 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.192 15:47:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.449 [2024-07-15 15:47:29.223374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.449 15:47:29 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.450 15:47:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.707 15:47:29 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.707 15:47:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.965 15:47:29 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.965 15:47:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.222 15:47:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.222 15:47:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.479 [2024-07-15 15:47:30.210693] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.479 15:47:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:03.479 15:47:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.479 15:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.479 15:47:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:03.479 15:47:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.479 15:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.479 15:47:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:03.479 15:47:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.479 15:47:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.737 MallocBdevForConfigChangeCheck 00:06:03.737 15:47:30 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:03.737 15:47:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.737 15:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.737 15:47:30 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:03.737 15:47:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.995 15:47:30 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:03.995 INFO: shutting down applications... 00:06:03.995 15:47:30 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:03.995 15:47:30 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:03.995 15:47:30 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:03.995 15:47:30 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.925 Calling clear_iscsi_subsystem 00:06:05.925 Calling clear_nvmf_subsystem 00:06:05.925 Calling clear_nbd_subsystem 00:06:05.925 Calling clear_ublk_subsystem 00:06:05.925 Calling clear_vhost_blk_subsystem 00:06:05.925 Calling clear_vhost_scsi_subsystem 00:06:05.925 Calling clear_bdev_subsystem 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.925 15:47:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:06.185 15:47:32 json_config -- json_config/json_config.sh@345 -- # break 00:06:06.185 15:47:32 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:06.185 15:47:32 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:06.185 15:47:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:06.185 15:47:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.185 15:47:32 json_config -- json_config/common.sh@35 -- # [[ -n 1032604 ]] 00:06:06.185 15:47:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1032604 00:06:06.185 15:47:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.185 15:47:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.185 15:47:32 json_config -- json_config/common.sh@41 -- # kill -0 1032604 00:06:06.185 15:47:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.752 15:47:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.752 15:47:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.752 15:47:33 json_config -- json_config/common.sh@41 -- # kill -0 1032604 00:06:06.752 15:47:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.752 15:47:33 json_config -- json_config/common.sh@43 -- # break 00:06:06.752 15:47:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.752 15:47:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:06.752 SPDK target shutdown done 00:06:06.752 15:47:33 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:06.752 INFO: relaunching applications... 00:06:06.752 15:47:33 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.752 15:47:33 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.752 15:47:33 json_config -- json_config/common.sh@10 -- # shift 00:06:06.752 15:47:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.752 15:47:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.752 15:47:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.752 15:47:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.752 15:47:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.752 15:47:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1033911 00:06:06.752 15:47:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.752 15:47:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.752 Waiting for target to run... 00:06:06.752 15:47:33 json_config -- json_config/common.sh@25 -- # waitforlisten 1033911 /var/tmp/spdk_tgt.sock 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 1033911 ']' 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.752 15:47:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.752 [2024-07-15 15:47:33.510957] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:06.752 [2024-07-15 15:47:33.511043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033911 ] 00:06:06.752 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.316 [2024-07-15 15:47:34.044066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.316 [2024-07-15 15:47:34.149970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.605 [2024-07-15 15:47:37.192982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.605 [2024-07-15 15:47:37.225431] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:11.171 15:47:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.171 15:47:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.171 15:47:37 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.171 00:06:11.171 15:47:37 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:11.171 15:47:37 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:11.171 INFO: Checking if target configuration is the same... 00:06:11.171 15:47:37 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.171 15:47:37 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:11.171 15:47:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.171 + '[' 2 -ne 2 ']' 00:06:11.171 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.171 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:11.171 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.171 +++ basename /dev/fd/62 00:06:11.171 ++ mktemp /tmp/62.XXX 00:06:11.171 + tmp_file_1=/tmp/62.rNe 00:06:11.171 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.171 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.171 + tmp_file_2=/tmp/spdk_tgt_config.json.IX7 00:06:11.171 + ret=0 00:06:11.171 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.429 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.429 + diff -u /tmp/62.rNe /tmp/spdk_tgt_config.json.IX7 00:06:11.429 + echo 'INFO: JSON config files are the same' 00:06:11.429 INFO: JSON config files are the same 00:06:11.429 + rm /tmp/62.rNe /tmp/spdk_tgt_config.json.IX7 00:06:11.429 + exit 0 00:06:11.429 15:47:38 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:11.429 15:47:38 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:11.429 INFO: changing configuration and checking if this can be detected... 
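The comparison just logged reduces to: dump the running target's configuration over RPC, normalize both JSON documents, and diff them; exit 0 means "the same". A sketch of the same pipeline, assuming config_filter.py -method sort filters stdin to stdout (the xtrace above does not show the redirections) and using illustrative temp-file names:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    test/json_config/config_filter.py -method sort < /tmp/live.json       > /tmp/live.sorted
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted
    diff -u /tmp/ref.sorted /tmp/live.sorted \
        && echo 'INFO: JSON config files are the same'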
00:06:11.429 15:47:38 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.429 15:47:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.687 15:47:38 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.687 15:47:38 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:11.687 15:47:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.687 + '[' 2 -ne 2 ']' 00:06:11.687 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.687 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:11.687 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.687 +++ basename /dev/fd/62 00:06:11.687 ++ mktemp /tmp/62.XXX 00:06:11.687 + tmp_file_1=/tmp/62.onl 00:06:11.687 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.687 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.687 + tmp_file_2=/tmp/spdk_tgt_config.json.tNA 00:06:11.687 + ret=0 00:06:11.687 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.258 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.258 + diff -u /tmp/62.onl /tmp/spdk_tgt_config.json.tNA 00:06:12.258 + ret=1 00:06:12.258 + echo '=== Start of file: /tmp/62.onl ===' 00:06:12.258 + cat /tmp/62.onl 00:06:12.258 + echo '=== End of file: /tmp/62.onl ===' 00:06:12.258 + echo '' 00:06:12.258 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tNA ===' 00:06:12.258 + cat /tmp/spdk_tgt_config.json.tNA 00:06:12.258 + echo '=== End of file: /tmp/spdk_tgt_config.json.tNA ===' 00:06:12.258 + echo '' 00:06:12.258 + rm /tmp/62.onl /tmp/spdk_tgt_config.json.tNA 00:06:12.258 + exit 1 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:12.258 INFO: configuration change detected. 
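Once the change is detected, the run finishes by tearing the target down the way common.sh does throughout this log: SIGINT first, then poll with kill -0 for at most 30 half-second intervals before announcing "SPDK target shutdown done". The traced loop, restated as a standalone sketch:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'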
00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@317 -- # [[ -n 1033911 ]] 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.258 15:47:39 json_config -- json_config/json_config.sh@323 -- # killprocess 1033911 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@948 -- # '[' -z 1033911 ']' 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@952 -- # kill -0 1033911 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@953 -- # uname 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1033911 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1033911' 00:06:12.258 killing process with pid 1033911 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@967 -- # kill 1033911 00:06:12.258 15:47:39 json_config -- common/autotest_common.sh@972 -- # wait 1033911 00:06:14.165 15:47:40 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.165 15:47:40 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:14.165 15:47:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.165 15:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 15:47:40 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:14.165 15:47:40 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:14.165 INFO: Success 00:06:14.165 00:06:14.165 real 0m16.764s 
00:06:14.165 user 0m18.709s 00:06:14.165 sys 0m2.079s 00:06:14.165 15:47:40 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.165 15:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 ************************************ 00:06:14.165 END TEST json_config 00:06:14.165 ************************************ 00:06:14.165 15:47:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.165 15:47:40 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.165 15:47:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.165 15:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.165 15:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 ************************************ 00:06:14.165 START TEST json_config_extra_key 00:06:14.165 ************************************ 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.165 15:47:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.165 15:47:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.165 15:47:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.165 15:47:40 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.165 15:47:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.165 15:47:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.165 15:47:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:14.165 15:47:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.165 15:47:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:14.165 15:47:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:14.165 INFO: launching applications... 00:06:14.165 15:47:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1034836 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.165 Waiting for target to run... 00:06:14.165 15:47:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1034836 /var/tmp/spdk_tgt.sock 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1034836 ']' 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.165 15:47:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 [2024-07-15 15:47:40.915426] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:14.165 [2024-07-15 15:47:40.915513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034836 ] 00:06:14.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.730 [2024-07-15 15:47:41.393364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.730 [2024-07-15 15:47:41.502654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.008 15:47:41 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.008 15:47:41 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.009 00:06:15.009 15:47:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:15.009 INFO: shutting down applications... 00:06:15.009 15:47:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1034836 ]] 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1034836 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1034836 00:06:15.009 15:47:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1034836 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.581 15:47:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.581 SPDK target shutdown done 00:06:15.581 15:47:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.581 Success 00:06:15.581 00:06:15.581 real 0m1.586s 00:06:15.581 user 0m1.489s 00:06:15.581 sys 0m0.577s 00:06:15.581 15:47:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.581 15:47:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.581 ************************************ 00:06:15.581 END TEST json_config_extra_key 00:06:15.581 ************************************ 00:06:15.581 15:47:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.581 15:47:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.581 15:47:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.581 15:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.581 15:47:42 -- 
00:06:15.581 15:47:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:15.581 15:47:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:15.581 15:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:15.581 15:47:42 -- common/autotest_common.sh@10 -- # set +x
00:06:15.581 ************************************
00:06:15.581 START TEST alias_rpc
00:06:15.581 ************************************
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:15.581 * Looking for test storage...
00:06:15.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:15.581 15:47:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:15.581 15:47:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1035142
00:06:15.581 15:47:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:15.581 15:47:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1035142
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1035142 ']'
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:15.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:15.581 15:47:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.840 [2024-07-15 15:47:42.544406] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:15.840 [2024-07-15 15:47:42.544485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035142 ]
00:06:15.840 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.840 [2024-07-15 15:47:42.604388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.840 [2024-07-15 15:47:42.721645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.784 15:47:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:16.784 15:47:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:16.784 15:47:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:17.042 15:47:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1035142
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1035142 ']'
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1035142
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035142
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035142'
00:06:17.042 killing process with pid 1035142
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@967 -- # kill 1035142
00:06:17.042 15:47:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 1035142
00:06:17.321 
00:06:17.321 real 0m1.793s
00:06:17.321 user 0m2.055s
00:06:17.321 sys 0m0.457s
00:06:17.321 15:47:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.321 15:47:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.321 ************************************
00:06:17.321 END TEST alias_rpc
00:06:17.321 ************************************
00:06:17.579 15:47:44 -- common/autotest_common.sh@1142 -- # return 0
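The alias_rpc run above shows the standard target lifecycle these tests share: start spdk_tgt in the background, wait for the RPC socket, drive it with rpc.py, then kill and wait. A condensed, hedged sketch of that flow (paths from the trace; the readiness poll stands in for the real waitforlisten helper):

  # Hedged sketch of the traced flow; not the verbatim alias_rpc.sh.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" &
  spdk_tgt_pid=$!
  trap 'kill "$spdk_tgt_pid"; exit 1' ERR
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1                            # wait for the RPC socket to accept connections
  done
  "$SPDK/scripts/rpc.py" load_config -i    # replay aliased JSON config from stdin
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"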
00:06:17.579 15:47:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:06:17.579 15:47:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:17.579 15:47:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:17.579 15:47:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:17.579 15:47:44 -- common/autotest_common.sh@10 -- # set +x
00:06:17.579 ************************************
00:06:17.579 START TEST spdkcli_tcp
00:06:17.579 ************************************
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:17.579 * Looking for test storage...
00:06:17.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1035342
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:17.579 15:47:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1035342
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1035342 ']'
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:17.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:17.579 15:47:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:17.579 [2024-07-15 15:47:44.392804] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:17.579 [2024-07-15 15:47:44.392911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035342 ]
00:06:17.579 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.579 [2024-07-15 15:47:44.454195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:17.838 [2024-07-15 15:47:44.579232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.838 [2024-07-15 15:47:44.579238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.405 15:47:45 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:18.405 15:47:45 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:06:18.405 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1035475
00:06:18.405 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:18.664 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:18.664 [
00:06:18.664 "bdev_malloc_delete",
00:06:18.664 "bdev_malloc_create",
00:06:18.664 "bdev_null_resize",
00:06:18.664 "bdev_null_delete",
00:06:18.664 "bdev_null_create",
00:06:18.664 "bdev_nvme_cuse_unregister",
00:06:18.664 "bdev_nvme_cuse_register",
00:06:18.664 "bdev_opal_new_user",
00:06:18.664 "bdev_opal_set_lock_state",
00:06:18.664 "bdev_opal_delete",
00:06:18.664 "bdev_opal_get_info",
00:06:18.664 "bdev_opal_create",
00:06:18.664 "bdev_nvme_opal_revert",
00:06:18.664 "bdev_nvme_opal_init",
00:06:18.664 "bdev_nvme_send_cmd",
00:06:18.664 "bdev_nvme_get_path_iostat",
00:06:18.664 "bdev_nvme_get_mdns_discovery_info",
00:06:18.664 "bdev_nvme_stop_mdns_discovery",
00:06:18.664 "bdev_nvme_start_mdns_discovery",
00:06:18.664 "bdev_nvme_set_multipath_policy",
00:06:18.664 "bdev_nvme_set_preferred_path",
00:06:18.664 "bdev_nvme_get_io_paths",
00:06:18.664 "bdev_nvme_remove_error_injection",
00:06:18.664 "bdev_nvme_add_error_injection",
00:06:18.664 "bdev_nvme_get_discovery_info",
00:06:18.664 "bdev_nvme_stop_discovery",
00:06:18.664 "bdev_nvme_start_discovery",
00:06:18.664 "bdev_nvme_get_controller_health_info",
00:06:18.664 "bdev_nvme_disable_controller",
00:06:18.664 "bdev_nvme_enable_controller",
00:06:18.664 "bdev_nvme_reset_controller",
00:06:18.664 "bdev_nvme_get_transport_statistics",
00:06:18.664 "bdev_nvme_apply_firmware",
00:06:18.664 "bdev_nvme_detach_controller",
00:06:18.664 "bdev_nvme_get_controllers",
00:06:18.664 "bdev_nvme_attach_controller",
00:06:18.664 "bdev_nvme_set_hotplug",
00:06:18.664 "bdev_nvme_set_options",
00:06:18.664 "bdev_passthru_delete",
00:06:18.664 "bdev_passthru_create",
00:06:18.664 "bdev_lvol_set_parent_bdev",
00:06:18.664 "bdev_lvol_set_parent",
00:06:18.664 "bdev_lvol_check_shallow_copy",
00:06:18.664 "bdev_lvol_start_shallow_copy",
00:06:18.664 "bdev_lvol_grow_lvstore",
00:06:18.664 "bdev_lvol_get_lvols",
00:06:18.664 "bdev_lvol_get_lvstores",
00:06:18.665 "bdev_lvol_delete",
00:06:18.665 "bdev_lvol_set_read_only",
00:06:18.665 "bdev_lvol_resize",
00:06:18.665 "bdev_lvol_decouple_parent",
00:06:18.665 "bdev_lvol_inflate",
00:06:18.665 "bdev_lvol_rename",
00:06:18.665 "bdev_lvol_clone_bdev",
00:06:18.665 "bdev_lvol_clone",
00:06:18.665 "bdev_lvol_snapshot",
00:06:18.665 "bdev_lvol_create",
00:06:18.665 "bdev_lvol_delete_lvstore",
00:06:18.665 "bdev_lvol_rename_lvstore",
00:06:18.665 "bdev_lvol_create_lvstore",
00:06:18.665 "bdev_raid_set_options",
00:06:18.665 "bdev_raid_remove_base_bdev",
00:06:18.665 "bdev_raid_add_base_bdev",
00:06:18.665 "bdev_raid_delete",
00:06:18.665 "bdev_raid_create",
00:06:18.665 "bdev_raid_get_bdevs",
00:06:18.665 "bdev_error_inject_error",
00:06:18.665 "bdev_error_delete",
00:06:18.665 "bdev_error_create",
00:06:18.665 "bdev_split_delete",
00:06:18.665 "bdev_split_create",
00:06:18.665 "bdev_delay_delete",
00:06:18.665 "bdev_delay_create",
00:06:18.665 "bdev_delay_update_latency",
00:06:18.665 "bdev_zone_block_delete",
00:06:18.665 "bdev_zone_block_create",
00:06:18.665 "blobfs_create",
00:06:18.665 "blobfs_detect",
00:06:18.665 "blobfs_set_cache_size",
00:06:18.665 "bdev_aio_delete",
00:06:18.665 "bdev_aio_rescan",
00:06:18.665 "bdev_aio_create",
00:06:18.665 "bdev_ftl_set_property",
00:06:18.665 "bdev_ftl_get_properties",
00:06:18.665 "bdev_ftl_get_stats",
00:06:18.665 "bdev_ftl_unmap",
00:06:18.665 "bdev_ftl_unload",
00:06:18.665 "bdev_ftl_delete",
00:06:18.665 "bdev_ftl_load",
00:06:18.665 "bdev_ftl_create",
00:06:18.665 "bdev_virtio_attach_controller",
00:06:18.665 "bdev_virtio_scsi_get_devices",
00:06:18.665 "bdev_virtio_detach_controller",
00:06:18.665 "bdev_virtio_blk_set_hotplug",
00:06:18.665 "bdev_iscsi_delete",
00:06:18.665 "bdev_iscsi_create",
00:06:18.665 "bdev_iscsi_set_options",
00:06:18.665 "accel_error_inject_error",
00:06:18.665 "ioat_scan_accel_module",
00:06:18.665 "dsa_scan_accel_module",
00:06:18.665 "iaa_scan_accel_module",
00:06:18.665 "vfu_virtio_create_scsi_endpoint",
00:06:18.665 "vfu_virtio_scsi_remove_target",
00:06:18.665 "vfu_virtio_scsi_add_target",
00:06:18.665 "vfu_virtio_create_blk_endpoint",
00:06:18.665 "vfu_virtio_delete_endpoint",
00:06:18.665 "keyring_file_remove_key",
00:06:18.665 "keyring_file_add_key",
00:06:18.665 "keyring_linux_set_options",
00:06:18.665 "iscsi_get_histogram",
00:06:18.665 "iscsi_enable_histogram",
00:06:18.665 "iscsi_set_options",
00:06:18.665 "iscsi_get_auth_groups",
00:06:18.665 "iscsi_auth_group_remove_secret",
00:06:18.665 "iscsi_auth_group_add_secret",
00:06:18.665 "iscsi_delete_auth_group",
00:06:18.665 "iscsi_create_auth_group",
00:06:18.665 "iscsi_set_discovery_auth",
00:06:18.665 "iscsi_get_options",
00:06:18.665 "iscsi_target_node_request_logout",
00:06:18.665 "iscsi_target_node_set_redirect",
00:06:18.665 "iscsi_target_node_set_auth",
00:06:18.665 "iscsi_target_node_add_lun",
00:06:18.665 "iscsi_get_stats",
00:06:18.665 "iscsi_get_connections",
00:06:18.665 "iscsi_portal_group_set_auth",
00:06:18.665 "iscsi_start_portal_group",
00:06:18.665 "iscsi_delete_portal_group",
00:06:18.665 "iscsi_create_portal_group",
00:06:18.665 "iscsi_get_portal_groups",
00:06:18.665 "iscsi_delete_target_node",
00:06:18.665 "iscsi_target_node_remove_pg_ig_maps",
00:06:18.665 "iscsi_target_node_add_pg_ig_maps",
00:06:18.665 "iscsi_create_target_node",
00:06:18.665 "iscsi_get_target_nodes",
00:06:18.665 "iscsi_delete_initiator_group",
00:06:18.665 "iscsi_initiator_group_remove_initiators",
00:06:18.665 "iscsi_initiator_group_add_initiators",
00:06:18.665 "iscsi_create_initiator_group",
00:06:18.665 "iscsi_get_initiator_groups",
00:06:18.665 "nvmf_set_crdt",
00:06:18.665 "nvmf_set_config",
00:06:18.665 "nvmf_set_max_subsystems",
00:06:18.665 "nvmf_stop_mdns_prr",
00:06:18.665 "nvmf_publish_mdns_prr",
00:06:18.665 "nvmf_subsystem_get_listeners",
00:06:18.665 "nvmf_subsystem_get_qpairs",
00:06:18.665 "nvmf_subsystem_get_controllers",
00:06:18.665 "nvmf_get_stats",
00:06:18.665 "nvmf_get_transports",
00:06:18.665 "nvmf_create_transport",
00:06:18.665 "nvmf_get_targets",
00:06:18.665 "nvmf_delete_target",
00:06:18.665 "nvmf_create_target",
00:06:18.665 "nvmf_subsystem_allow_any_host",
00:06:18.665 "nvmf_subsystem_remove_host",
00:06:18.665 "nvmf_subsystem_add_host",
00:06:18.665 "nvmf_ns_remove_host",
00:06:18.665 "nvmf_ns_add_host",
00:06:18.665 "nvmf_subsystem_remove_ns",
00:06:18.665 "nvmf_subsystem_add_ns",
00:06:18.665 "nvmf_subsystem_listener_set_ana_state",
00:06:18.665 "nvmf_discovery_get_referrals",
00:06:18.665 "nvmf_discovery_remove_referral",
00:06:18.665 "nvmf_discovery_add_referral",
00:06:18.665 "nvmf_subsystem_remove_listener",
00:06:18.665 "nvmf_subsystem_add_listener",
00:06:18.665 "nvmf_delete_subsystem",
00:06:18.665 "nvmf_create_subsystem",
00:06:18.665 "nvmf_get_subsystems",
00:06:18.665 "env_dpdk_get_mem_stats",
00:06:18.665 "nbd_get_disks",
00:06:18.665 "nbd_stop_disk",
00:06:18.665 "nbd_start_disk",
00:06:18.665 "ublk_recover_disk",
00:06:18.665 "ublk_get_disks",
00:06:18.665 "ublk_stop_disk",
00:06:18.665 "ublk_start_disk",
00:06:18.665 "ublk_destroy_target",
00:06:18.665 "ublk_create_target",
00:06:18.665 "virtio_blk_create_transport",
00:06:18.665 "virtio_blk_get_transports",
00:06:18.665 "vhost_controller_set_coalescing",
00:06:18.665 "vhost_get_controllers",
00:06:18.665 "vhost_delete_controller",
00:06:18.665 "vhost_create_blk_controller",
00:06:18.665 "vhost_scsi_controller_remove_target",
00:06:18.665 "vhost_scsi_controller_add_target",
00:06:18.665 "vhost_start_scsi_controller",
00:06:18.665 "vhost_create_scsi_controller",
00:06:18.665 "thread_set_cpumask",
00:06:18.665 "framework_get_governor",
00:06:18.665 "framework_get_scheduler",
00:06:18.665 "framework_set_scheduler",
00:06:18.665 "framework_get_reactors",
00:06:18.665 "thread_get_io_channels",
00:06:18.665 "thread_get_pollers",
00:06:18.665 "thread_get_stats",
00:06:18.666 "framework_monitor_context_switch",
00:06:18.666 "spdk_kill_instance",
00:06:18.666 "log_enable_timestamps",
00:06:18.666 "log_get_flags",
00:06:18.666 "log_clear_flag",
00:06:18.666 "log_set_flag",
00:06:18.666 "log_get_level",
00:06:18.666 "log_set_level",
00:06:18.666 "log_get_print_level",
00:06:18.666 "log_set_print_level",
00:06:18.666 "framework_enable_cpumask_locks",
00:06:18.666 "framework_disable_cpumask_locks",
00:06:18.666 "framework_wait_init",
00:06:18.666 "framework_start_init",
00:06:18.666 "scsi_get_devices",
00:06:18.666 "bdev_get_histogram",
00:06:18.666 "bdev_enable_histogram",
00:06:18.666 "bdev_set_qos_limit",
00:06:18.666 "bdev_set_qd_sampling_period",
00:06:18.666 "bdev_get_bdevs",
00:06:18.666 "bdev_reset_iostat",
00:06:18.666 "bdev_get_iostat",
00:06:18.666 "bdev_examine",
00:06:18.666 "bdev_wait_for_examine",
00:06:18.666 "bdev_set_options",
00:06:18.666 "notify_get_notifications",
00:06:18.666 "notify_get_types",
00:06:18.666 "accel_get_stats",
00:06:18.666 "accel_set_options",
00:06:18.666 "accel_set_driver",
00:06:18.666 "accel_crypto_key_destroy",
00:06:18.666 "accel_crypto_keys_get",
00:06:18.666 "accel_crypto_key_create",
00:06:18.666 "accel_assign_opc",
00:06:18.666 "accel_get_module_info",
00:06:18.666 "accel_get_opc_assignments",
00:06:18.666 "vmd_rescan",
00:06:18.666 "vmd_remove_device",
00:06:18.666 "vmd_enable",
00:06:18.666 "sock_get_default_impl",
00:06:18.666 "sock_set_default_impl",
00:06:18.666 "sock_impl_set_options",
00:06:18.666 "sock_impl_get_options",
00:06:18.666 "iobuf_get_stats",
00:06:18.666 "iobuf_set_options",
00:06:18.666 "keyring_get_keys",
00:06:18.666 "framework_get_pci_devices",
00:06:18.666 "framework_get_config",
00:06:18.666 "framework_get_subsystems",
00:06:18.666 "vfu_tgt_set_base_path",
00:06:18.666 "trace_get_info",
00:06:18.666 "trace_get_tpoint_group_mask",
00:06:18.666 "trace_disable_tpoint_group",
00:06:18.666 "trace_enable_tpoint_group",
00:06:18.666 "trace_clear_tpoint_mask",
00:06:18.666 "trace_set_tpoint_mask",
00:06:18.666 "spdk_get_version",
00:06:18.666 "rpc_get_methods"
00:06:18.666 ]
00:06:18.666 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:18.666 15:47:45 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:18.666 15:47:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:18.926 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:18.926 15:47:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1035342
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1035342 ']'
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1035342
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035342
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035342'
00:06:18.926 killing process with pid 1035342
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1035342
00:06:18.926 15:47:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1035342
00:06:19.185 
00:06:19.185 real 0m1.802s
00:06:19.185 user 0m3.466s
00:06:19.185 sys 0m0.476s
00:06:19.185 15:47:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:19.185 15:47:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:19.185 ************************************
00:06:19.185 END TEST spdkcli_tcp
00:06:19.185 ************************************
00:06:19.185 15:47:46 -- common/autotest_common.sh@1142 -- # return 0
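The spdkcli_tcp run above exercises RPC over TCP by bridging the UNIX-domain RPC socket to a TCP port with socat and pointing rpc.py at 127.0.0.1:9998. A short, hedged sketch of that bridge, using the commands from the trace (cleanup simplified):

  # Hedged sketch; commands taken from the trace above, cleanup simplified.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose the UNIX socket on TCP 9998
  socat_pid=$!
  # -r: connection retries, -t: timeout; retries cover the bridge's startup window
  rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"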
00:06:19.185 15:47:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:19.185 15:47:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:19.185 15:47:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:19.185 15:47:46 -- common/autotest_common.sh@10 -- # set +x
00:06:19.444 ************************************
00:06:19.444 START TEST dpdk_mem_utility
00:06:19.444 ************************************
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:19.444 * Looking for test storage...
00:06:19.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:06:19.444 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:19.444 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1035673
00:06:19.444 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:19.444 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1035673
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1035673 ']'
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:19.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:19.444 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:19.444 [2024-07-15 15:47:46.236010] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:19.444 [2024-07-15 15:47:46.236102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035673 ]
00:06:19.444 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.444 [2024-07-15 15:47:46.293245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.701 [2024-07-15 15:47:46.398797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.962 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:19.962 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:06:19.962 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:19.962 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:19.962 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:19.962 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:19.962 {
00:06:19.962 "filename": "/tmp/spdk_mem_dump.txt"
00:06:19.962 }
00:06:19.962 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:19.962 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:19.962 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:19.962 1 heaps totaling size 814.000000 MiB
00:06:19.962 size: 814.000000 MiB heap id: 0
00:06:19.962 end heaps----------
00:06:19.962 8 mempools totaling size 598.116089 MiB
00:06:19.962 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:19.962 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:19.962 size: 84.521057 MiB name: bdev_io_1035673
00:06:19.962 size: 51.011292 MiB name: evtpool_1035673
00:06:19.962 size: 50.003479 MiB name: msgpool_1035673
00:06:19.962 size: 21.763794 MiB name: PDU_Pool
00:06:19.962 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:19.962 size: 0.026123 MiB name: Session_Pool
00:06:19.962 end mempools-------
00:06:19.962 6 memzones totaling size 4.142822 MiB
00:06:19.962 size: 1.000366 MiB name: RG_ring_0_1035673
00:06:19.962 size: 1.000366 MiB name: RG_ring_1_1035673
00:06:19.962 size: 1.000366 MiB name: RG_ring_4_1035673
00:06:19.962 size: 1.000366 MiB name: RG_ring_5_1035673
00:06:19.962 size: 0.125366 MiB name: RG_ring_2_1035673
00:06:19.962 size: 0.015991 MiB name: RG_ring_3_1035673
00:06:19.962 end memzones-------
00:06:19.962 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:19.962 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:06:19.962 list of free elements. size: 12.519348 MiB
00:06:19.962 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:19.962 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:19.962 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:19.962 element at address: 0x200003e00000 with size: 0.996277 MiB
00:06:19.962 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:19.962 element at address: 0x200013800000 with size: 0.978699 MiB
00:06:19.962 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:19.962 element at address: 0x200019200000 with size: 0.936584 MiB
00:06:19.962 element at address: 0x200000200000 with size: 0.841614 MiB
00:06:19.962 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:06:19.962 element at address: 0x20000b200000 with size: 0.490723 MiB
00:06:19.962 element at address: 0x200000800000 with size: 0.487793 MiB
00:06:19.962 element at address: 0x200019400000 with size: 0.485657 MiB
00:06:19.962 element at address: 0x200027e00000 with size: 0.410034 MiB
00:06:19.962 element at address: 0x200003a00000 with size: 0.355530 MiB
00:06:19.962 list of standard malloc elements. size: 199.218079 MiB
00:06:19.962 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:06:19.962 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:06:19.962 element at address: 0x200018efff80 with size: 1.000122 MiB
00:06:19.962 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:06:19.962 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:19.962 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:19.962 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:06:19.962 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:19.962 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:06:19.962 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003adb300 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003adb500 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003affa80 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003affb40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:06:19.962 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:06:19.962 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200027e69040 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:06:19.962 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:06:19.963 list of memzone associated elements. size: 602.262573 MiB
00:06:19.963 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:19.963 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:19.963 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:19.963 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:19.963 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:19.963 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1035673_0
00:06:19.963 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:19.963 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1035673_0
00:06:19.963 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:19.963 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1035673_0
00:06:19.963 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:19.963 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:19.963 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:19.963 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:19.963 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:19.963 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1035673
00:06:19.963 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:19.963 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1035673
00:06:19.963 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:19.963 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1035673
00:06:19.963 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:19.963 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:19.963 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:19.963 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:19.963 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:19.963 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:19.963 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:19.963 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:19.963 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:19.963 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1035673
00:06:19.963 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:19.963 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1035673
00:06:19.963 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:19.963 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1035673
00:06:19.963 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:19.963 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1035673
00:06:19.963 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:19.963 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1035673
00:06:19.963 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:19.963 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:19.963 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:19.963 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:19.963 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:19.963 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:19.963 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:19.963 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1035673
00:06:19.963 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:19.963 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:19.963 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:19.963 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:19.963 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:19.963 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1035673
00:06:19.963 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:19.963 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:19.963 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:19.963 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1035673
00:06:19.963 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:19.963 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1035673
00:06:19.963 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:19.963 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:19.963 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:19.963 15:47:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1035673
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1035673 ']'
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1035673
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035673
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035673'
00:06:19.963 killing process with pid 1035673
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1035673
00:06:19.963 15:47:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1035673
00:06:20.534 
00:06:20.534 real 0m1.142s
00:06:20.534 user 0m1.098s
00:06:20.534 sys 0m0.416s
00:06:20.534 15:47:47 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.534 15:47:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:20.534 ************************************
00:06:20.534 END TEST dpdk_mem_utility
00:06:20.534 ************************************
00:06:20.534 15:47:47 -- common/autotest_common.sh@1142 -- # return 0
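The dump above comes from env_dpdk_get_mem_stats followed by two dpdk_mem_info.py invocations: a plain run summarizing heaps, mempools, and memzones, and a -m 0 run detailing heap 0. A hedged sketch of that sequence (rpc_cmd is the harness's wrapper around scripts/rpc.py; paths from the trace):

  # Hedged sketch of the traced sequence; MEM_SCRIPT path from the log.
  MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
  rpc_cmd env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  "$MEM_SCRIPT"                    # summarize heaps, mempools and memzones
  "$MEM_SCRIPT" -m 0               # per-element detail for heap 0, as dumped above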
00:06:20.534 15:47:47 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:20.534 15:47:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:20.534 15:47:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.534 15:47:47 -- common/autotest_common.sh@10 -- # set +x
00:06:20.534 ************************************
00:06:20.534 START TEST event
00:06:20.534 ************************************
00:06:20.534 15:47:47 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:20.534 * Looking for test storage...
00:06:20.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:20.534 15:47:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:20.534 15:47:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:20.534 15:47:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:20.534 15:47:47 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:20.534 15:47:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.534 15:47:47 event -- common/autotest_common.sh@10 -- # set +x
00:06:20.534 ************************************
00:06:20.534 START TEST event_perf
00:06:20.534 ************************************
00:06:20.534 15:47:47 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:20.534 Running I/O for 1 seconds...[2024-07-15 15:47:47.410777] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:20.534 [2024-07-15 15:47:47.410845] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035861 ]
00:06:20.534 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.792 [2024-07-15 15:47:47.475022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:20.792 [2024-07-15 15:47:47.587828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:20.792 [2024-07-15 15:47:47.590896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:20.792 [2024-07-15 15:47:47.590968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:20.792 [2024-07-15 15:47:47.590971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.170 Running I/O for 1 seconds...
00:06:22.170 lcore 0: 224950
00:06:22.170 lcore 1: 224951
00:06:22.170 lcore 2: 224951
00:06:22.170 lcore 3: 224950
00:06:22.170 done.
00:06:22.170 
00:06:22.170 real 0m1.321s
00:06:22.170 user 0m4.229s
00:06:22.170 sys 0m0.084s
00:06:22.170 15:47:48 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.170 15:47:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:22.170 ************************************
00:06:22.170 END TEST event_perf
00:06:22.170 ************************************
00:06:22.170 15:47:48 event -- common/autotest_common.sh@1142 -- # return 0
00:06:22.170 15:47:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:22.170 15:47:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:22.170 15:47:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.170 15:47:48 event -- common/autotest_common.sh@10 -- # set +x
00:06:22.170 ************************************
00:06:22.170 START TEST event_reactor
00:06:22.170 ************************************
00:06:22.170 15:47:48 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:22.170 [2024-07-15 15:47:48.775901] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:22.170 [2024-07-15 15:47:48.775970] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036018 ]
00:06:22.170 EAL: No free 2048 kB hugepages reported on node 1
00:06:22.170 [2024-07-15 15:47:48.836809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.170 [2024-07-15 15:47:48.953850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.551 test_start
00:06:23.551 oneshot
00:06:23.551 tick 100
00:06:23.551 tick 100
00:06:23.551 tick 250
00:06:23.551 tick 100
00:06:23.551 tick 100
00:06:23.551 tick 100
00:06:23.551 tick 250
00:06:23.551 tick 500
00:06:23.551 tick 100
00:06:23.551 tick 100
00:06:23.551 tick 250
00:06:23.551 tick 100
00:06:23.551 tick 100
00:06:23.551 test_end
00:06:23.551 
00:06:23.551 real 0m1.308s
00:06:23.551 user 0m1.225s
00:06:23.551 sys 0m0.078s
00:06:23.551 15:47:50 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:23.551 15:47:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:23.551 ************************************
00:06:23.551 END TEST event_reactor
00:06:23.551 ************************************
00:06:23.551 15:47:50 event -- common/autotest_common.sh@1142 -- # return 0
00:06:23.551 15:47:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:23.551 15:47:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:23.551 15:47:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:23.551 15:47:50 event -- common/autotest_common.sh@10 -- # set +x
00:06:23.551 ************************************
00:06:23.551 START TEST event_reactor_perf
00:06:23.551 ************************************
00:06:23.551 15:47:50 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:23.551 [2024-07-15 15:47:50.134037] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:23.551 [2024-07-15 15:47:50.134096] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036196 ]
00:06:23.551 EAL: No free 2048 kB hugepages reported on node 1
00:06:23.551 [2024-07-15 15:47:50.198358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.551 [2024-07-15 15:47:50.316117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.933 test_start
00:06:24.933 test_end
00:06:24.933 Performance: 357095 events per second
00:06:24.933 
00:06:24.933 real 0m1.318s
00:06:24.933 user 0m1.227s
00:06:24.933 sys 0m0.086s
00:06:24.933 15:47:51 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.933 15:47:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:24.933 ************************************
00:06:24.933 END TEST event_reactor_perf
00:06:24.933 ************************************
00:06:24.933 15:47:51 event -- common/autotest_common.sh@1142 -- # return 0
00:06:24.933 15:47:51 event -- event/event.sh@49 -- # uname -s
00:06:24.933 15:47:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:24.933 15:47:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:24.933 15:47:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:24.933 15:47:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.933 15:47:51 event -- common/autotest_common.sh@10 -- # set +x
00:06:24.934 ************************************
00:06:24.934 START TEST event_scheduler
00:06:24.934 ************************************
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:24.934 * Looking for test storage...
00:06:24.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1036480
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1036480
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1036480 ']'
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:24.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:24.934 [2024-07-15 15:47:51.575525] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:24.934 [2024-07-15 15:47:51.575613] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036480 ]
00:06:24.934 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.934 [2024-07-15 15:47:51.633574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:24.934 [2024-07-15 15:47:51.742009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.934 [2024-07-15 15:47:51.742073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:24.934 [2024-07-15 15:47:51.742138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:24.934 [2024-07-15 15:47:51.742142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:24.934 [2024-07-15 15:47:51.782841] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:24.934 [2024-07-15 15:47:51.782890] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:24.934 [2024-07-15 15:47:51.782910] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:24.934 [2024-07-15 15:47:51.782921] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:24.934 [2024-07-15 15:47:51.782941] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:24.934 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:24.934 15:47:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:25.194 [2024-07-15 15:47:51.880806] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
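The bring-up above relies on --wait-for-rpc: the scheduler app starts paused, the test selects the dynamic scheduler over RPC, and only then lets framework initialization proceed. A minimal, hedged sketch of that ordering (binary path and flags from the trace; rpc_cmd is the harness's rpc.py wrapper):

  # Hedged sketch; --wait-for-rpc pauses the app before subsystem init
  # so the scheduler can still be chosen.
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  rpc_cmd framework_set_scheduler dynamic   # must happen before init completes
  rpc_cmd framework_start_init              # now run framework initialization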
00:06:25.194 15:47:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.194 15:47:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:25.194 15:47:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:25.194 15:47:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:25.194 15:47:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:25.194 ************************************
00:06:25.194 START TEST scheduler_create_thread
00:06:25.194 ************************************
00:06:25.194 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 2
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 3
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 4
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 5
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 6
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 7
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 8
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 9
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 10
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.195 15:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:26.576 15:47:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:26.576 
00:06:26.576 real 0m1.172s
00:06:26.576 user 0m0.011s
00:06:26.576 sys 0m0.003s
00:06:26.576 15:47:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.576 15:47:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:26.576 ************************************
00:06:26.576 END TEST scheduler_create_thread
00:06:26.576 ************************************
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
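The scheduler_create_thread trace above drives a test-only RPC plugin that creates, activates, and deletes threads with chosen cpumasks and active percentages. A condensed, hedged sketch with illustrative values (the plugin ships with test/event/scheduler; thread ids are returned by the create call):

  # Hedged sketch; RPC names from the trace, values illustrative.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"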
00:06:26.576 15:47:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:26.576 15:47:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1036480
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1036480 ']'
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1036480
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1036480
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1036480'
00:06:26.576 killing process with pid 1036480
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1036480
00:06:26.576 15:47:53 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1036480
00:06:26.835 [2024-07-15 15:47:53.562935] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:27.093 
00:06:27.093 real 0m2.334s
00:06:27.093 user 0m2.652s
00:06:27.093 sys 0m0.322s
00:06:27.093 15:47:53 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.093 15:47:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:27.093 ************************************
00:06:27.093 END TEST event_scheduler
00:06:27.093 ************************************
00:06:27.093 15:47:53 event -- common/autotest_common.sh@1142 -- # return 0
00:06:27.093 15:47:53 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:27.093 15:47:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:27.093 15:47:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:27.093 15:47:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:27.093 15:47:53 event -- common/autotest_common.sh@10 -- # set +x
00:06:27.093 ************************************
00:06:27.093 START TEST app_repeat
00:06:27.093 ************************************
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1036802
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1036802'
00:06:27.093 Process app_repeat pid: 1036802
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:27.093 spdk_app_start Round 0
00:06:27.093 15:47:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036802 /var/tmp/spdk-nbd.sock
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1036802 ']'
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:27.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:27.093 15:47:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:27.093 [2024-07-15 15:47:53.901656] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:27.093 [2024-07-15 15:47:53.901718] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036802 ] 00:06:27.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.093 [2024-07-15 15:47:53.963024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.351 [2024-07-15 15:47:54.079300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.351 [2024-07-15 15:47:54.079306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.351 15:47:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.351 15:47:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:27.351 15:47:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.610 Malloc0 00:06:27.610 15:47:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.868 Malloc1 00:06:27.868 15:47:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.868 15:47:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.126 /dev/nbd0 00:06:28.126 15:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.126 15:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.126 15:47:54 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.126 15:47:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.126 1+0 records in 00:06:28.126 1+0 records out 00:06:28.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164871 s, 24.8 MB/s 00:06:28.126 15:47:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.126 15:47:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.126 15:47:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.126 15:47:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.126 15:47:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.126 15:47:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.126 15:47:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.126 15:47:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.384 /dev/nbd1 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.384 1+0 records in 00:06:28.384 1+0 records out 00:06:28.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214693 s, 19.1 MB/s 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.384 15:47:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.384 15:47:55 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.384 15:47:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.642 { 00:06:28.642 "nbd_device": "/dev/nbd0", 00:06:28.642 "bdev_name": "Malloc0" 00:06:28.642 }, 00:06:28.642 { 00:06:28.642 "nbd_device": "/dev/nbd1", 00:06:28.642 "bdev_name": "Malloc1" 00:06:28.642 } 00:06:28.642 ]' 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.642 { 00:06:28.642 "nbd_device": "/dev/nbd0", 00:06:28.642 "bdev_name": "Malloc0" 00:06:28.642 }, 00:06:28.642 { 00:06:28.642 "nbd_device": "/dev/nbd1", 00:06:28.642 "bdev_name": "Malloc1" 00:06:28.642 } 00:06:28.642 ]' 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.642 /dev/nbd1' 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.642 /dev/nbd1' 00:06:28.642 15:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.900 256+0 records in 00:06:28.900 256+0 records out 00:06:28.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498495 s, 210 MB/s 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.900 256+0 records in 00:06:28.900 256+0 records out 00:06:28.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235624 s, 44.5 MB/s 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.900 256+0 records in 00:06:28.900 256+0 records out 00:06:28.900 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0225257 s, 46.6 MB/s 00:06:28.900 15:47:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.901 15:47:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.160 15:47:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.466 15:47:56 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.466 15:47:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.724 15:47:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.724 15:47:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.983 15:47:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.241 [2024-07-15 15:47:57.033234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.241 [2024-07-15 15:47:57.148552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.241 [2024-07-15 15:47:57.148552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.500 [2024-07-15 15:47:57.210360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.500 [2024-07-15 15:47:57.210442] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.026 15:47:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.026 15:47:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.026 spdk_app_start Round 1 00:06:33.026 15:47:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036802 /var/tmp/spdk-nbd.sock 00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1036802 ']' 00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
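Condensing the Round 0 body traced above into plain commands (each RPC and dd/cmp invocation appears verbatim in the trace; only the loops and variable names are editorial shorthand):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

$rpc -s $sock bdev_malloc_create 64 4096          # event.sh@27 -> Malloc0
$rpc -s $sock bdev_malloc_create 64 4096          # event.sh@28 -> Malloc1
$rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
$rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$tmp bs=4096 count=256      # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write through each nbd device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $nbd                              # read back and compare byte-for-byte
done
rm $tmp

$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd1
$rpc -s $sock spdk_kill_instance SIGTERM          # event.sh@34: end the round

The dd throughput figures in the log (~40-50 MB/s for the per-device writes versus ~210 MB/s for the urandom fill) reflect the oflag=direct path through the nbd kernel module, not raw malloc bdev speed.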
00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.026 15:47:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.283 15:48:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.283 15:48:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:33.283 15:48:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.540 Malloc0 00:06:33.540 15:48:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.798 Malloc1 00:06:33.798 15:48:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.798 15:48:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.056 /dev/nbd0 00:06:34.056 15:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.056 15:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:34.056 1+0 records in 00:06:34.056 1+0 records out 00:06:34.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160429 s, 25.5 MB/s 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.056 15:48:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.056 15:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.056 15:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.056 15:48:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.315 /dev/nbd1 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.315 1+0 records in 00:06:34.315 1+0 records out 00:06:34.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174669 s, 23.5 MB/s 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.315 15:48:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.315 15:48:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:34.573 { 00:06:34.573 "nbd_device": "/dev/nbd0", 00:06:34.573 "bdev_name": "Malloc0" 00:06:34.573 }, 00:06:34.573 { 00:06:34.573 "nbd_device": "/dev/nbd1", 00:06:34.573 "bdev_name": "Malloc1" 00:06:34.573 } 00:06:34.573 ]' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.573 { 00:06:34.573 "nbd_device": "/dev/nbd0", 00:06:34.573 "bdev_name": "Malloc0" 00:06:34.573 }, 00:06:34.573 { 00:06:34.573 "nbd_device": "/dev/nbd1", 00:06:34.573 "bdev_name": "Malloc1" 00:06:34.573 } 00:06:34.573 ]' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.573 /dev/nbd1' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.573 /dev/nbd1' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.573 256+0 records in 00:06:34.573 256+0 records out 00:06:34.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496974 s, 211 MB/s 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.573 256+0 records in 00:06:34.573 256+0 records out 00:06:34.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209491 s, 50.1 MB/s 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.573 256+0 records in 00:06:34.573 256+0 records out 00:06:34.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245032 s, 42.8 MB/s 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.573 15:48:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.831 15:48:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.831 15:48:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.088 15:48:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.346 15:48:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.604 15:48:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.604 15:48:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.862 15:48:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.120 [2024-07-15 15:48:02.930799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.120 [2024-07-15 15:48:03.046720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.120 [2024-07-15 15:48:03.046725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.378 [2024-07-15 15:48:03.110062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.378 [2024-07-15 15:48:03.110136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.904 15:48:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.904 15:48:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:38.904 spdk_app_start Round 2 00:06:38.904 15:48:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036802 /var/tmp/spdk-nbd.sock 00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1036802 ']' 00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
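Each round ends with the nbd_get_count check traced at bdev/nbd_common.sh@61-@66, asserting that no nbd devices remain attached after the two nbd_stop_disk calls. A sketch reconstructed from those lines ($rpc as above):

nbd_get_count() {
    local rpc_server=$1                                 # @61
    local json names count
    json=$($rpc -s "$rpc_server" nbd_get_disks)         # @63: returns a JSON array
    names=$(echo "$json" | jq -r '.[] | .nbd_device')   # @64
    count=$(echo "$names" | grep -c /dev/nbd || true)   # @65: grep -c prints 0 but exits 1
    echo "$count"                                       # @66
}

The "true" at @65 in the trace is that "|| true": grep -c exits nonzero when the count is zero, and without the guard an errexit shell would abort even though 0 is the expected answer here.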
00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.904 15:48:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.162 15:48:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.162 15:48:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:39.162 15:48:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.420 Malloc0 00:06:39.420 15:48:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.679 Malloc1 00:06:39.679 15:48:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.679 15:48:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.937 /dev/nbd0 00:06:39.937 15:48:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.937 15:48:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:39.937 1+0 records in 00:06:39.937 1+0 records out 00:06:39.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170601 s, 24.0 MB/s 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.937 15:48:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.937 15:48:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.937 15:48:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.937 15:48:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.194 /dev/nbd1 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.194 1+0 records in 00:06:40.194 1+0 records out 00:06:40.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215767 s, 19.0 MB/s 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.194 15:48:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.194 15:48:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:40.452 { 00:06:40.452 "nbd_device": "/dev/nbd0", 00:06:40.452 "bdev_name": "Malloc0" 00:06:40.452 }, 00:06:40.452 { 00:06:40.452 "nbd_device": "/dev/nbd1", 00:06:40.452 "bdev_name": "Malloc1" 00:06:40.452 } 00:06:40.452 ]' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.452 { 00:06:40.452 "nbd_device": "/dev/nbd0", 00:06:40.452 "bdev_name": "Malloc0" 00:06:40.452 }, 00:06:40.452 { 00:06:40.452 "nbd_device": "/dev/nbd1", 00:06:40.452 "bdev_name": "Malloc1" 00:06:40.452 } 00:06:40.452 ]' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.452 /dev/nbd1' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.452 /dev/nbd1' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.452 15:48:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.453 256+0 records in 00:06:40.453 256+0 records out 00:06:40.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486294 s, 216 MB/s 00:06:40.453 15:48:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.453 15:48:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.711 256+0 records in 00:06:40.711 256+0 records out 00:06:40.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242979 s, 43.2 MB/s 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.711 256+0 records in 00:06:40.711 256+0 records out 00:06:40.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026021 s, 40.3 MB/s 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.711 15:48:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.969 15:48:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.227 15:48:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.485 15:48:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.485 15:48:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.742 15:48:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.000 [2024-07-15 15:48:08.854891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.257 [2024-07-15 15:48:08.969213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.257 [2024-07-15 15:48:08.969213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.257 [2024-07-15 15:48:09.027088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.257 [2024-07-15 15:48:09.027152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.964 15:48:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1036802 /var/tmp/spdk-nbd.sock 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1036802 ']' 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
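The waitfornbd helper traced repeatedly above (autotest_common.sh@866-@887) gates every nbd attach: first it waits for the device to appear in /proc/partitions, then it proves the device actually services I/O with a single direct read. A sketch from those lines (the retry sleeps and the || fallback are assumptions not visible in the trace; the test file path here is shortened from the workspace path the log uses):

waitfornbd() {
    local nbd_name=$1 i                                    # @866/@867
    local test_file=/var/tmp/nbdtest
    for (( i = 1; i <= 20; i++ )); do                      # @869
        grep -q -w "$nbd_name" /proc/partitions && break   # @870/@871
        sleep 0.1                                          # assumed retry delay
    done
    for (( i = 1; i <= 20; i++ )); do                      # @882
        # @883: one 4 KiB O_DIRECT read proves the device is live
        dd if=/dev/$nbd_name of=$test_file bs=4096 count=1 iflag=direct ||
            { sleep 0.1; continue; }
        local size=$(stat -c %s $test_file)                # @884
        rm -f $test_file                                   # @885
        [ "$size" != 0 ] && return 0                       # @886/@887
    done
    return 1
}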
00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:44.964 15:48:11 event.app_repeat -- event/event.sh@39 -- # killprocess 1036802 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1036802 ']' 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1036802 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1036802 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1036802' 00:06:44.964 killing process with pid 1036802 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1036802 00:06:44.964 15:48:11 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1036802 00:06:45.222 spdk_app_start is called in Round 0. 00:06:45.222 Shutdown signal received, stop current app iteration 00:06:45.222 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:06:45.222 spdk_app_start is called in Round 1. 00:06:45.222 Shutdown signal received, stop current app iteration 00:06:45.222 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:06:45.222 spdk_app_start is called in Round 2. 00:06:45.222 Shutdown signal received, stop current app iteration 00:06:45.222 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:06:45.222 spdk_app_start is called in Round 3. 
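The "spdk_app_start is called in Round 0..3" recap that follows is the app's own stdout, flushed at exit: app_repeat was launched with -t 4 (repeat_times=4 at event.sh@15), so the binary announces four rounds, while event.sh drives rounds 0-2 via RPC SIGTERM and the fourth shutdown comes from the final killprocess. The driver loop, sketched from the event.sh@23-@39 trace tags ($rpc as above):

for i in {0..2}; do                                           # event.sh@23
    echo "spdk_app_start Round $i"                            # event.sh@24
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock          # event.sh@25
    # malloc bdevs, nbd attach, 1 MiB write/verify, detach (traced above)
    $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM # event.sh@34
    sleep 3                                                   # event.sh@35
done
waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock              # event.sh@38
killprocess $repeat_pid                                       # event.sh@39 -> the Round 3 shutdown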
00:06:45.222 Shutdown signal received, stop current app iteration 00:06:45.222 15:48:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:45.222 15:48:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:45.222 00:06:45.222 real 0m18.235s 00:06:45.222 user 0m39.486s 00:06:45.222 sys 0m3.310s 00:06:45.222 15:48:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.222 15:48:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.222 ************************************ 00:06:45.222 END TEST app_repeat 00:06:45.222 ************************************ 00:06:45.222 15:48:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.222 15:48:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:45.222 15:48:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:45.222 15:48:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.222 15:48:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.222 15:48:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.480 ************************************ 00:06:45.480 START TEST cpu_locks 00:06:45.480 ************************************ 00:06:45.480 15:48:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:45.480 * Looking for test storage... 00:06:45.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:45.480 15:48:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:45.480 15:48:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:45.480 15:48:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:45.480 15:48:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:45.480 15:48:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.480 15:48:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.480 15:48:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.480 ************************************ 00:06:45.480 START TEST default_locks 00:06:45.480 ************************************ 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1039774 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1039774 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1039774 ']' 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
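The cpu_locks suite that starts here launches spdk_tgt with -m 0x1 (one core) and asserts that the target is holding its CPU core lock file. The check is the locks_exist call at cpu_locks.sh@22 in the trace below, roughly:

locks_exist() {
    local pid=$1
    # the target takes a POSIX lock on a spdk_cpu_lock file per claimed core
    # (the per-core naming is an assumption beyond what the grep pattern shows)
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # cpu_locks.sh@22
}

The "lslocks: write error" seen a few lines down is benign pipe noise: grep -q exits as soon as it matches, closing the pipe, and lslocks reports the resulting EPIPE on its remaining output.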
00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.480 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.480 [2024-07-15 15:48:12.278720] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:45.480 [2024-07-15 15:48:12.278803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039774 ]
00:06:45.480 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.480 [2024-07-15 15:48:12.334541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.738 [2024-07-15 15:48:12.442168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.995 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:45.995 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:06:45.995 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1039774
00:06:45.995 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1039774
00:06:45.995 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:46.253 lslocks: write error
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1039774
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1039774 ']'
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1039774
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1039774
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1039774'
00:06:46.253 killing process with pid 1039774
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1039774
00:06:46.253 15:48:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1039774
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1039774
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1039774
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1039774
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1039774 ']'
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:46.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:46.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1039774) - No such process
00:06:46.526 ERROR: process (pid: 1039774) is no longer running
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:46.526
00:06:46.526 real 0m1.214s
00:06:46.526 user 0m1.120s
00:06:46.526 sys 0m0.531s
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:46.526 15:48:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:46.526 ************************************
00:06:46.526 END TEST default_locks
00:06:46.526 ************************************
00:06:46.831 15:48:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:46.831 15:48:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:46.831 15:48:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:46.831 15:48:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:46.831 15:48:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:46.831 ************************************
00:06:46.831 START TEST default_locks_via_rpc
00:06:46.831 ************************************
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1039939
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
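default_locks passes by proving a negative: once the target is killed, waitforlisten on its pid has to fail, and no lock may remain behind. The NOT wrapper whose xtrace (es=0 ... es=1 ... (( !es == 0 ))) appears above inverts an exit status; a sketch reconstructed from those trace steps (the real definition sits in test/common/autotest_common.sh and may differ in detail):

  # NOT <cmd...>: the assertion passes only when the wrapped command fails
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # a non-zero exit from the command becomes success here
  }

  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
  NOT waitforlisten "$spdk_tgt_pid"   # dead target -> waitforlisten fails -> test step passes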
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1039939
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1039939 ']'
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:46.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:46.831 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:46.831 [2024-07-15 15:48:13.545628] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:46.831 [2024-07-15 15:48:13.545720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039939 ]
00:06:46.831 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.831 [2024-07-15 15:48:13.603075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.089 [2024-07-15 15:48:13.712921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.089 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:47.089 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:47.089 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:47.089 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:47.089 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1039939
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1039939
00:06:47.090 15:48:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1039939
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1039939 ']'
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1039939
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1039939
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1039939'
00:06:47.654 killing process with pid 1039939
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1039939
00:06:47.654 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1039939
00:06:47.913
00:06:47.913 real 0m1.296s
00:06:47.913 user 0m1.239s
00:06:47.913 sys 0m0.498s
00:06:47.913 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:47.913 15:48:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.913 ************************************
00:06:47.913 END TEST default_locks_via_rpc
00:06:47.913 ************************************
00:06:47.913 15:48:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:47.913 15:48:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:47.913 15:48:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:47.913 15:48:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:47.913 15:48:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.913 ************************************
00:06:47.913 START TEST non_locking_app_on_locked_coremask
00:06:47.913 ************************************
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1040112
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1040112 /var/tmp/spdk.sock
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1040112 ']'
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
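In default_locks_via_rpc the lock is toggled at runtime instead of at startup: the trace shows rpc_cmd framework_disable_cpumask_locks releasing the core lock (so no_locks passes) and framework_enable_cpumask_locks re-claiming it (so locks_exist passes again). A sketch of the same exchange, assuming rpc_cmd forwards the call to the target's JSON-RPC socket; the method names are the ones visible in this log:

  spdk_tgt -m 0x1 &                          # starts with the core 0 lock held
  waitforlisten "$!"
  rpc_cmd framework_disable_cpumask_locks    # lock file released; lslocks finds nothing
  rpc_cmd framework_enable_cpumask_locks     # core 0 lock re-acquired by the same pid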
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:47.913 15:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.172 [2024-07-15 15:48:14.884118] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:48.172 [2024-07-15 15:48:14.884218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040112 ]
00:06:48.172 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.172 [2024-07-15 15:48:14.941455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.172 [2024-07-15 15:48:15.049905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1040230
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1040230 /var/tmp/spdk2.sock
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1040230 ']'
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:48.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:48.430 15:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.688 [2024-07-15 15:48:15.365021] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:48.688 [2024-07-15 15:48:15.365097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040230 ]
00:06:48.688 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.688 [2024-07-15 15:48:15.458565] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:48.688 [2024-07-15 15:48:15.458599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.969 [2024-07-15 15:48:15.693713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.534 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:49.534 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:49.535 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1040112
00:06:49.535 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1040112
00:06:49.535 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:50.100 lslocks: write error
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1040112
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1040112 ']'
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1040112
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1040112
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1040112'
00:06:50.100 killing process with pid 1040112
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1040112
00:06:50.100 15:48:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1040112
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1040230
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1040230 ']'
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1040230
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1040230
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1040230'
00:06:51.034 killing process with pid 1040230
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1040230
00:06:51.034 15:48:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1040230
00:06:51.601
00:06:51.601 real 0m3.437s
00:06:51.601 user 0m3.574s
00:06:51.601 sys 0m1.077s
00:06:51.601 15:48:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:51.601 15:48:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.601 ************************************
00:06:51.601 END TEST non_locking_app_on_locked_coremask
00:06:51.601 ************************************
00:06:51.601 15:48:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:51.601 15:48:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:51.601 15:48:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:51.601 15:48:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:51.601 15:48:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.601 ************************************
00:06:51.601 START TEST locking_app_on_unlocked_coremask
00:06:51.601 ************************************
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1040625
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1040625 /var/tmp/spdk.sock
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1040625 ']'
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:51.601 15:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.601 [2024-07-15 15:48:18.371337] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
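non_locking_app_on_locked_coremask, which just finished above, pairs a locking target with a non-locking one on the same core: the second instance opts out with --disable-cpumask-locks and talks on its own RPC socket, so both can run while the lock stays with the first pid. The shape of the scenario, with the pids and flags taken from the trace:

  spdk_tgt -m 0x1 &                                                  # pid 1040112, claims core 0
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 1040230, same core, no lock taken
  locks_exist 1040112        # passes: the core 0 lock still belongs to the first target
  killprocess 1040112 && killprocess 1040230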
00:06:51.601 [2024-07-15 15:48:18.371431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040625 ]
00:06:51.601 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.601 [2024-07-15 15:48:18.433848] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:51.601 [2024-07-15 15:48:18.433893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.859 [2024-07-15 15:48:18.550232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1040679
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1040679 /var/tmp/spdk2.sock
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1040679 ']'
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:52.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:52.424 15:48:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.682 [2024-07-15 15:48:19.353105] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:52.682 [2024-07-15 15:48:19.353200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040679 ]
00:06:52.682 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.682 [2024-07-15 15:48:19.453434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.940 [2024-07-15 15:48:19.687435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.505 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:53.505 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:53.505 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1040679
00:06:53.505 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1040679
00:06:53.505 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:54.071 lslocks: write error
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1040625
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1040625 ']'
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1040625
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1040625
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1040625'
00:06:54.071 killing process with pid 1040625
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1040625
00:06:54.071 15:48:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1040625
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1040679
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1040679 ']'
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1040679
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1040679
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1040679'
00:06:55.006 killing process with pid 1040679
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1040679
00:06:55.006 15:48:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1040679
00:06:55.572
00:06:55.572 real 0m3.964s
00:06:55.572 user 0m4.310s
00:06:55.572 sys 0m1.113s
00:06:55.572 15:48:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.572 15:48:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.572 ************************************
00:06:55.572 END TEST locking_app_on_unlocked_coremask
00:06:55.572 ************************************
00:06:55.572 15:48:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:55.572 15:48:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:55.572 15:48:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:55.572 15:48:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.572 15:48:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.572 ************************************
00:06:55.572 START TEST locking_app_on_locked_coremask
00:06:55.572 ************************************
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1041105
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1041105 /var/tmp/spdk.sock
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1041105 ']'
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:55.573 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.573 [2024-07-15 15:48:22.390596] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
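locking_app_on_unlocked_coremask inverts the previous case: the first target starts with --disable-cpumask-locks, leaving core 0 unlocked, and a second, plain target then claims the lock for itself; locks_exist is checked against the second pid. Sketched with the flags and pids from the trace:

  spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 1040625, takes no lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 1040679, finds core 0 free and locks it
  locks_exist 1040679    # passes: the lock belongs to the locking (second) instance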
00:06:55.573 [2024-07-15 15:48:22.390695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041105 ]
00:06:55.573 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.573 [2024-07-15 15:48:22.454078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.831 [2024-07-15 15:48:22.568198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1041187
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1041187 /var/tmp/spdk2.sock
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1041187 /var/tmp/spdk2.sock
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1041187 /var/tmp/spdk2.sock
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1041187 ']'
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:56.090 15:48:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.090 [2024-07-15 15:48:22.884911] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:56.090 [2024-07-15 15:48:22.885011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041187 ]
00:06:56.090 EAL: No free 2048 kB hugepages reported on node 1
00:06:56.090 [2024-07-15 15:48:22.968140] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1041105 has claimed it.
00:06:56.090 [2024-07-15 15:48:22.968219] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:56.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1041187) - No such process
00:06:56.655 ERROR: process (pid: 1041187) is no longer running
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1041105
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1041105
00:06:56.655 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.219 lslocks: write error
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1041105
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1041105 ']'
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1041105
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041105
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041105'
00:06:57.219 killing process with pid 1041105
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1041105
00:06:57.219 15:48:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1041105
00:06:57.476
00:06:57.476 real 0m2.021s
00:06:57.476 user 0m2.177s
00:06:57.476 sys 0m0.633s
00:06:57.476 15:48:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:57.476 15:48:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.476 ************************************
00:06:57.476 END TEST locking_app_on_locked_coremask
00:06:57.476 ************************************
00:06:57.476 15:48:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:57.476 15:48:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:57.476 15:48:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:57.476 15:48:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:57.476 15:48:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:57.734 ************************************
00:06:57.734 START TEST locking_overlapped_coremask
00:06:57.734 ************************************
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1041401
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1041401 /var/tmp/spdk.sock
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1041401 ']'
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:57.734 15:48:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.734 [2024-07-15 15:48:24.461473] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
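locking_app_on_locked_coremask, concluded above, is the strict-conflict case: with pid 1041105 already holding core 0, a second plain spdk_tgt on the same mask must refuse to start, which is exactly the claim_cpu_cores error and "Unable to acquire lock on assigned core mask - exiting" seen in the trace. The assertion is the NOT pattern again, sketched with the pids from the log:

  spdk_tgt -m 0x1 &                         # pid 1041105 claims core 0
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # pid 1041187 aborts at startup: core 0 already locked
  NOT waitforlisten 1041187 /var/tmp/spdk2.sock   # its failure to listen is the passing check
  locks_exist 1041105                       # and the original owner still holds the lock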
00:06:57.734 [2024-07-15 15:48:24.461575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041401 ]
00:06:57.734 EAL: No free 2048 kB hugepages reported on node 1
00:06:57.734 [2024-07-15 15:48:24.523234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:57.734 [2024-07-15 15:48:24.638539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:57.734 [2024-07-15 15:48:24.638605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:57.734 [2024-07-15 15:48:24.638608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1041541
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1041541 /var/tmp/spdk2.sock
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1041541 /var/tmp/spdk2.sock
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1041541 /var/tmp/spdk2.sock
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1041541 ']'
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:58.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:58.665 15:48:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:58.665 [2024-07-15 15:48:25.442112] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:58.665 [2024-07-15 15:48:25.442220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041541 ]
00:06:58.665 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.665 [2024-07-15 15:48:25.528121] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1041401 has claimed it.
00:06:58.665 [2024-07-15 15:48:25.528200] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:59.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1041541) - No such process
00:06:59.230 ERROR: process (pid: 1041541) is no longer running
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1041401
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1041401 ']'
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1041401
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:59.230 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041401
00:06:59.488 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:59.488 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:59.488 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041401'
00:06:59.488 killing process with pid 1041401
00:06:59.488 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1041401
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1041401
00:06:59.747
00:06:59.747 real 0m2.193s
00:06:59.747 user 0m6.118s
00:06:59.747 sys 0m0.502s
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:59.747 ************************************
00:06:59.747 END TEST locking_overlapped_coremask
00:06:59.747 ************************************
00:06:59.747 15:48:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:59.747 15:48:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:59.747 15:48:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:59.747 15:48:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:59.747 15:48:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:59.747 ************************************
00:06:59.747 START TEST locking_overlapped_coremask_via_rpc
00:06:59.747 ************************************
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1041703
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1041703 /var/tmp/spdk.sock
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1041703 ']'
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:59.747 15:48:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.005 [2024-07-15 15:48:26.709665] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:00.005 [2024-07-15 15:48:26.709765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041703 ]
00:07:00.005 EAL: No free 2048 kB hugepages reported on node 1
00:07:00.005 [2024-07-15 15:48:26.773769] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
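locking_overlapped_coremask, wrapped up above, moves from identical masks to merely overlapping ones, and the contended core follows directly from the mask arithmetic:

  # 0x7  = 0b00111 -> cores 0,1,2  (first target, pid 1041401)
  # 0x1c = 0b11100 -> cores 2,3,4  (second target, pid 1041541)
  # 0x7 & 0x1c = 0x04 -> core 2 is claimed by both, so the second start-up
  # fails with "Cannot create lock on core 2", exactly as the trace shows

The via_rpc variant that starts next reuses the same masks but defers locking with --disable-cpumask-locks, so the conflict can be triggered over JSON-RPC instead of at process launch.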
00:07:00.005 [2024-07-15 15:48:26.773807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:00.005 [2024-07-15 15:48:26.889045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.005 [2024-07-15 15:48:26.889102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.005 [2024-07-15 15:48:26.889121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1041841
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1041841 /var/tmp/spdk2.sock
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1041841 ']'
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:00.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:00.937 15:48:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.937 [2024-07-15 15:48:27.681349] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:00.937 [2024-07-15 15:48:27.681445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041841 ]
00:07:00.937 EAL: No free 2048 kB hugepages reported on node 1
00:07:00.937 [2024-07-15 15:48:27.769356] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:00.937 [2024-07-15 15:48:27.769398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:01.195 [2024-07-15 15:48:27.988848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:01.195 [2024-07-15 15:48:27.991979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:07:01.195 [2024-07-15 15:48:27.991981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:01.791 [2024-07-15 15:48:28.634988] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1041703 has claimed it.
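Here the overlap is exercised over JSON-RPC: both targets came up with --disable-cpumask-locks, the first (cores 0-2) then claimed its locks via framework_enable_cpumask_locks, and the same call against the second target's socket has to fail on the shared core 2, surfacing as the JSON-RPC error reproduced just below rather than as a startup abort. In sketch form, with the sockets and method names taken from the trace:

  rpc_cmd framework_enable_cpumask_locks                              # first target locks cores 0,1,2
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target cannot lock core 2
  # the failed call is expected to return error -32603 ("Failed to claim CPU core: 2")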
00:07:01.791 request: 00:07:01.791 { 00:07:01.791 "method": "framework_enable_cpumask_locks", 00:07:01.791 "req_id": 1 00:07:01.791 } 00:07:01.791 Got JSON-RPC error response 00:07:01.791 response: 00:07:01.791 { 00:07:01.791 "code": -32603, 00:07:01.791 "message": "Failed to claim CPU core: 2" 00:07:01.791 } 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1041703 /var/tmp/spdk.sock 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1041703 ']' 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.791 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1041841 /var/tmp/spdk2.sock 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1041841 ']' 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
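The -32603 response above is the point of the test: framework_enable_cpumask_locks makes the second target try to take a lock file for every core in its mask, and core 2 is still held by the first target, so the RPC must fail on the second socket even though the same RPC succeeded on the first. A sketch of driving the same pair of calls by hand, assuming the stock scripts/rpc.py wrapper that rpc_cmd shells out to:

    ./scripts/rpc.py framework_enable_cpumask_locks    # first target, default /var/tmp/spdk.sock: succeeds
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # fails with -32603 'Failed to claim CPU core: 2' while pid 1041703 holds that lock file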
00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.048 15:48:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.306 00:07:02.306 real 0m2.479s 00:07:02.306 user 0m1.188s 00:07:02.306 sys 0m0.230s 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.306 15:48:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.306 ************************************ 00:07:02.306 END TEST locking_overlapped_coremask_via_rpc 00:07:02.306 ************************************ 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:02.306 15:48:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.306 15:48:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1041703 ]] 00:07:02.306 15:48:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1041703 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1041703 ']' 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1041703 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041703 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041703' 00:07:02.306 killing process with pid 1041703 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1041703 00:07:02.306 15:48:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1041703 00:07:02.868 15:48:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1041841 ]] 00:07:02.868 15:48:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1041841 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1041841 ']' 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1041841 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@953 -- # 
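check_remaining_locks, expanded in the xtrace above, reduces to comparing a glob of the live lock files against a brace expansion of the expected ones. The same check as a standalone snippet, paths exactly as the test uses them:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # with locks enabled on cores 0-2, exactly lock files 000-002 must exist
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"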
uname 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041841 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041841' 00:07:02.868 killing process with pid 1041841 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1041841 00:07:02.868 15:48:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1041841 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1041703 ]] 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1041703 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1041703 ']' 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1041703 00:07:03.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1041703) - No such process 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1041703 is not found' 00:07:03.434 Process with pid 1041703 is not found 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1041841 ]] 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1041841 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1041841 ']' 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1041841 00:07:03.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1041841) - No such process 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1041841 is not found' 00:07:03.434 Process with pid 1041841 is not found 00:07:03.434 15:48:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.434 00:07:03.434 real 0m17.953s 00:07:03.434 user 0m32.078s 00:07:03.434 sys 0m5.482s 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.434 15:48:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 ************************************ 00:07:03.434 END TEST cpu_locks 00:07:03.434 ************************************ 00:07:03.434 15:48:30 event -- common/autotest_common.sh@1142 -- # return 0 00:07:03.434 00:07:03.434 real 0m42.814s 00:07:03.434 user 1m21.045s 00:07:03.434 sys 0m9.582s 00:07:03.434 15:48:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.434 15:48:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 ************************************ 00:07:03.434 END TEST event 00:07:03.434 ************************************ 00:07:03.434 15:48:30 -- common/autotest_common.sh@1142 -- # return 0 00:07:03.434 15:48:30 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:03.434 15:48:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.434 15:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.434 
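killprocess, used four times above (twice on live targets, then twice more from cleanup once the pids are already gone, hence the "No such process" lines), follows a probe-then-kill-then-wait pattern. A simplified sketch of the idiom; the real autotest_common.sh helper additionally resolves the process name with ps and special-cases processes running as sudo:

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then        # signal 0 only probes for existence
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"                   # reap the child so no zombie is left behind
    }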
15:48:30 -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 ************************************ 00:07:03.434 START TEST thread 00:07:03.434 ************************************ 00:07:03.434 15:48:30 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:03.434 * Looking for test storage... 00:07:03.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:03.434 15:48:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.434 15:48:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:03.434 15:48:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.434 15:48:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 ************************************ 00:07:03.434 START TEST thread_poller_perf 00:07:03.434 ************************************ 00:07:03.434 15:48:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.434 [2024-07-15 15:48:30.271122] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:03.434 [2024-07-15 15:48:30.271208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042212 ] 00:07:03.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.434 [2024-07-15 15:48:30.333239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.691 [2024-07-15 15:48:30.448666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.691 Running 1000 pollers for 1 seconds with 1 microseconds period. 
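The ====================================== summary blocks that follow report total busy TSC cycles, completed poller calls, and the TSC frequency; poller_cost is simply busy cycles divided by run count, converted to nanoseconds via tsc_hz. Re-deriving both runs' figures from the logged numbers:

    echo $(( 2711633259 / 291000 ))     # 1 us period run: ~9318 cycles per poller call
    awk 'BEGIN { print 9318 / 2.7 }'    # at tsc_hz 2.7 GHz: ~3451 ns
    echo $(( 2702665579 / 3869000 ))    # 0 us period run (second summary below): ~698 cycles
    awk 'BEGIN { print 698 / 2.7 }'     # ~258 ns per call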
00:07:04.667 ====================================== 00:07:04.667 busy:2711633259 (cyc) 00:07:04.667 total_run_count: 291000 00:07:04.667 tsc_hz: 2700000000 (cyc) 00:07:04.667 ====================================== 00:07:04.667 poller_cost: 9318 (cyc), 3451 (nsec) 00:07:04.667 00:07:04.667 real 0m1.324s 00:07:04.667 user 0m1.233s 00:07:04.667 sys 0m0.085s 00:07:04.667 15:48:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.667 15:48:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.667 ************************************ 00:07:04.667 END TEST thread_poller_perf 00:07:04.667 ************************************ 00:07:04.924 15:48:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:04.924 15:48:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.924 15:48:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:04.924 15:48:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.924 15:48:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.924 ************************************ 00:07:04.924 START TEST thread_poller_perf 00:07:04.924 ************************************ 00:07:04.924 15:48:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.924 [2024-07-15 15:48:31.646673] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:04.924 [2024-07-15 15:48:31.646743] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042369 ] 00:07:04.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.924 [2024-07-15 15:48:31.711662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.924 [2024-07-15 15:48:31.833982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.924 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:06.292 ====================================== 00:07:06.292 busy:2702665579 (cyc) 00:07:06.292 total_run_count: 3869000 00:07:06.292 tsc_hz: 2700000000 (cyc) 00:07:06.292 ====================================== 00:07:06.292 poller_cost: 698 (cyc), 258 (nsec) 00:07:06.292 00:07:06.292 real 0m1.325s 00:07:06.292 user 0m1.231s 00:07:06.292 sys 0m0.087s 00:07:06.292 15:48:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.292 15:48:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.292 ************************************ 00:07:06.292 END TEST thread_poller_perf 00:07:06.292 ************************************ 00:07:06.292 15:48:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:06.292 15:48:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.292 00:07:06.292 real 0m2.798s 00:07:06.292 user 0m2.519s 00:07:06.292 sys 0m0.278s 00:07:06.292 15:48:32 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.292 15:48:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.292 ************************************ 00:07:06.292 END TEST thread 00:07:06.292 ************************************ 00:07:06.292 15:48:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.292 15:48:32 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:06.292 15:48:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.292 15:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.292 15:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:06.292 ************************************ 00:07:06.292 START TEST accel 00:07:06.292 ************************************ 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:06.292 * Looking for test storage... 00:07:06.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:06.292 15:48:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:06.292 15:48:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:06.292 15:48:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.292 15:48:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1042682 00:07:06.292 15:48:33 accel -- accel/accel.sh@63 -- # waitforlisten 1042682 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@829 -- # '[' -z 1042682 ']' 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.292 15:48:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.292 15:48:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.292 15:48:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.292 15:48:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.292 15:48:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.292 15:48:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.292 15:48:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.292 15:48:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.292 15:48:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:06.292 15:48:33 accel -- accel/accel.sh@41 -- # jq -r . 00:07:06.292 [2024-07-15 15:48:33.127467] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:06.292 [2024-07-15 15:48:33.127566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042682 ] 00:07:06.293 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.293 [2024-07-15 15:48:33.185438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.549 [2024-07-15 15:48:33.292114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.806 15:48:33 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.806 15:48:33 accel -- common/autotest_common.sh@862 -- # return 0 00:07:06.806 15:48:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:06.806 15:48:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:06.806 15:48:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:06.806 15:48:33 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:06.806 15:48:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:06.806 15:48:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:06.806 15:48:33 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.806 15:48:33 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:06.806 15:48:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.806 15:48:33 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 
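The for/IFS/read loop here (its iterations continue through the next stretch of output) walks the accel_get_opc_assignments RPC output one "opcode=module" pair at a time and records that every opcode is currently served by the software module, since no hardware accel modules are configured in this run. The same query by hand, output shape inferred from the jq filter above:

    ./scripts/rpc.py accel_get_opc_assignments
    # => {"copy": "software", "fill": "software", "crc32c": "software", ...}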
15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.806 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.806 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.806 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.807 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.807 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.807 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.807 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.807 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.807 15:48:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.807 15:48:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.807 15:48:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.807 15:48:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.807 15:48:33 accel -- accel/accel.sh@75 -- # killprocess 1042682 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@948 -- # '[' -z 1042682 ']' 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@952 -- # kill -0 1042682 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@953 -- # uname 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1042682 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1042682' 00:07:06.807 killing process with pid 1042682 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@967 -- # kill 1042682 00:07:06.807 15:48:33 accel -- common/autotest_common.sh@972 -- # wait 1042682 00:07:07.372 15:48:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:07.372 15:48:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.372 15:48:34 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:07.372 15:48:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:07.372 15:48:34 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.372 15:48:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.372 15:48:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.372 15:48:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.372 ************************************ 00:07:07.372 START TEST accel_missing_filename 00:07:07.372 ************************************ 00:07:07.372 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:07.372 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.373 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:07.373 15:48:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:07.373 [2024-07-15 15:48:34.197691] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:07.373 [2024-07-15 15:48:34.197756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042848 ] 00:07:07.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.373 [2024-07-15 15:48:34.263220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.631 [2024-07-15 15:48:34.380910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.631 [2024-07-15 15:48:34.442585] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.631 [2024-07-15 15:48:34.531330] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:07.889 A filename is required. 
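accel_missing_filename ends with exactly the failure it wants: for compress/decompress workloads accel_perf reads its input from the file named by -l, so launching the workload without one must abort with "A filename is required." Both sides of that contract, with the suite's input file path shortened to repo-relative form:

    ./build/examples/accel_perf -t 1 -w compress                      # aborts: A filename is required.
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib    # valid: compresses the bib test file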
00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.889 00:07:07.889 real 0m0.477s 00:07:07.889 user 0m0.371s 00:07:07.889 sys 0m0.141s 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.889 15:48:34 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.889 END TEST accel_missing_filename 00:07:07.889 ************************************ 00:07:07.889 15:48:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.889 15:48:34 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:07.889 15:48:34 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:07.889 15:48:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.889 15:48:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.889 START TEST accel_compress_verify 00:07:07.889 ************************************ 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.889 15:48:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.889 15:48:34 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:07.889 15:48:34 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:07.889 [2024-07-15 15:48:34.719496] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:07.889 [2024-07-15 15:48:34.719561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042880 ] 00:07:07.889 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.889 [2024-07-15 15:48:34.782806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.147 [2024-07-15 15:48:34.901609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.147 [2024-07-15 15:48:34.963175] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.147 [2024-07-15 15:48:35.051575] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:08.406 00:07:08.406 Compression does not support the verify option, aborting. 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.406 00:07:08.406 real 0m0.472s 00:07:08.406 user 0m0.363s 00:07:08.406 sys 0m0.141s 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.406 15:48:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:08.406 ************************************ 00:07:08.406 END TEST accel_compress_verify 00:07:08.406 ************************************ 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.406 15:48:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.406 ************************************ 00:07:08.406 START TEST accel_wrong_workload 00:07:08.406 ************************************ 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.406 15:48:35 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:08.406 15:48:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:08.406 Unsupported workload type: foobar 00:07:08.406 [2024-07-15 15:48:35.236570] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:08.406 accel_perf options: 00:07:08.406 [-h help message] 00:07:08.406 [-q queue depth per core] 00:07:08.406 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:08.406 [-T number of threads per core 00:07:08.406 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:08.406 [-t time in seconds] 00:07:08.406 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:08.406 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:08.406 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:08.406 [-l for compress/decompress workloads, name of uncompressed input file 00:07:08.406 [-S for crc32c workload, use this seed value (default 0) 00:07:08.406 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:08.406 [-f for fill workload, use this BYTE value (default 255) 00:07:08.406 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:08.406 [-y verify result if this switch is on] 00:07:08.406 [-a tasks to allocate per core (default: same value as -q)] 00:07:08.406 Can be used to spread operations across a wider range of memory. 
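The usage text above doubles as the authoritative list of valid -w values, and foobar is rejected during argument parsing before any I/O is set up. For contrast, with a workload taken from that list:

    ./build/examples/accel_perf -t 1 -w foobar    # rejected: Unsupported workload type: foobar
    ./build/examples/accel_perf -t 1 -w copy -y   # parses: copy is a supported workload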
00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.406 00:07:08.406 real 0m0.022s 00:07:08.406 user 0m0.014s 00:07:08.406 sys 0m0.009s 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.406 15:48:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:08.406 ************************************ 00:07:08.406 END TEST accel_wrong_workload 00:07:08.406 ************************************ 00:07:08.406 Error: writing output failed: Broken pipe 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.406 15:48:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.406 15:48:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.407 ************************************ 00:07:08.407 START TEST accel_negative_buffers 00:07:08.407 ************************************ 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:08.407 15:48:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:08.407 -x option must be non-negative. 
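accel_negative_buffers drives the same early parser rejection down the -x path: per the usage text, xor takes a source-buffer count with a minimum of 2, so -1 is refused up front. A sketch of the rejected call next to an accepted one, semantics taken from the usage text above:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # refused: -x option must be non-negative.
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # accepted: xor across three source buffers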
00:07:08.407 [2024-07-15 15:48:35.306729] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:08.407 accel_perf options: 00:07:08.407 [-h help message] 00:07:08.407 [-q queue depth per core] 00:07:08.407 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:08.407 [-T number of threads per core 00:07:08.407 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:08.407 [-t time in seconds] 00:07:08.407 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:08.407 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:08.407 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:08.407 [-l for compress/decompress workloads, name of uncompressed input file 00:07:08.407 [-S for crc32c workload, use this seed value (default 0) 00:07:08.407 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:08.407 [-f for fill workload, use this BYTE value (default 255) 00:07:08.407 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:08.407 [-y verify result if this switch is on] 00:07:08.407 [-a tasks to allocate per core (default: same value as -q)] 00:07:08.407 Can be used to spread operations across a wider range of memory. 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.407 00:07:08.407 real 0m0.025s 00:07:08.407 user 0m0.012s 00:07:08.407 sys 0m0.012s 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.407 15:48:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:08.407 ************************************ 00:07:08.407 END TEST accel_negative_buffers 00:07:08.407 ************************************ 00:07:08.407 Error: writing output failed: Broken pipe 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.407 15:48:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.407 15:48:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.665 ************************************ 00:07:08.665 START TEST accel_crc32c 00:07:08.665 ************************************ 00:07:08.665 15:48:35 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:08.665 15:48:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:08.665 [2024-07-15 15:48:35.369885] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:08.665 [2024-07-15 15:48:35.369965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043065 ] 00:07:08.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.665 [2024-07-15 15:48:35.433979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.665 [2024-07-15 15:48:35.549903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.923 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.924 15:48:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:10.296 15:48:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.296 00:07:10.296 real 0m1.469s 00:07:10.296 user 0m1.326s 00:07:10.296 sys 0m0.145s 00:07:10.296 15:48:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.296 15:48:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:10.296 ************************************ 00:07:10.296 END TEST accel_crc32c 00:07:10.296 ************************************ 00:07:10.296 15:48:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.297 15:48:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:10.297 15:48:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.297 15:48:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.297 15:48:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.297 ************************************ 00:07:10.297 START TEST accel_crc32c_C2 00:07:10.297 ************************************ 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.297 15:48:36 accel.accel_crc32c_C2 
00:07:10.297 15:48:36 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:10.297 15:48:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:07:10.297 15:48:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:10.297 15:48:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:10.297 15:48:36 accel -- common/autotest_common.sh@10 -- # set +x
00:07:10.297 ************************************
00:07:10.297 START TEST accel_crc32c_C2
00:07:10.297 ************************************
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:10.297 15:48:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:10.297 [2024-07-15 15:48:36.881685] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:10.297 [2024-07-15 15:48:36.881739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043223 ]
00:07:10.297 EAL: No free 2048 kB hugepages reported on node 1
00:07:10.297 [2024-07-15 15:48:36.942150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.297 [2024-07-15 15:48:37.060438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:10.297 15:48:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:11.670 15:48:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:11.670 15:48:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:11.670 15:48:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:11.670
00:07:11.670 real 0m1.470s
00:07:11.670 user 0m1.332s
00:07:11.670 sys 0m0.140s
00:07:11.670 15:48:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:11.670 15:48:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:11.670 ************************************
00:07:11.670 END TEST accel_crc32c_C2
00:07:11.670 ************************************
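For reference, the accel_perf binary that run_test drives above can also be invoked by hand. The path and workload flags below are copied from the log; the flag readings (-t run time in seconds, -w workload, -y verify the results, -C chained crc32c count) are inferred from context and may differ across SPDK versions:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2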
00:07:11.670 15:48:38 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:11.670 15:48:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:11.670 15:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:11.670 15:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:11.670 15:48:38 accel -- common/autotest_common.sh@10 -- # set +x
00:07:11.670 ************************************
00:07:11.670 START TEST accel_copy
00:07:11.670 ************************************
00:07:11.670 15:48:38 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:11.670 15:48:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:11.670 [2024-07-15 15:48:38.401669] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:11.670 [2024-07-15 15:48:38.401733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043380 ]
00:07:11.670 EAL: No free 2048 kB hugepages reported on node 1
00:07:11.670 [2024-07-15 15:48:38.465466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.670 [2024-07-15 15:48:38.580503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:11.928 15:48:38 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:13.302 15:48:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:13.302 15:48:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:13.302 15:48:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:13.302
00:07:13.302 real 0m1.467s
00:07:13.302 user 0m1.319s
00:07:13.302 sys 0m0.148s
00:07:13.302 15:48:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.303 15:48:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:13.303 ************************************
00:07:13.303 END TEST accel_copy
00:07:13.303 ************************************
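Each accel_perf start prints "EAL: No free 2048 kB hugepages reported on node 1"; here it appears to be informational only, since every case still completes. If it did need chasing, the hugepage pools can be inspected with standard Linux interfaces (generic commands, not part of this test suite):

    grep -i huge /proc/meminfo
    # per-NUMA-node view; node1 matches the node named in the notice
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages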
00:07:13.303 15:48:39 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:13.303 15:48:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.303 15:48:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:13.303 15:48:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:13.303 15:48:39 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.303 ************************************
00:07:13.303 START TEST accel_fill
00:07:13.303 ************************************
00:07:13.303 15:48:39 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:13.303 15:48:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:13.303 [2024-07-15 15:48:39.910380] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:13.303 [2024-07-15 15:48:39.910444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043654 ]
00:07:13.303 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.303 [2024-07-15 15:48:39.972094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.303 [2024-07-15 15:48:40.091602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:13.303 15:48:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:14.678 15:48:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:14.678 15:48:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:14.678 15:48:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:14.678
00:07:14.678 real 0m1.464s
00:07:14.678 user 0m1.328s
00:07:14.678 sys 0m0.137s
00:07:14.678 15:48:41 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:14.678 15:48:41 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:14.678 ************************************
00:07:14.678 END TEST accel_fill
00:07:14.678 ************************************
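accel_fill threads extra knobs through the same accel_test entry point. Matching them against the val= trace above (fill pattern 0x80 == 128, and the paired 64s), a plausible reading is -f fill byte, -q queue depth, -a buffer alignment; a hand run under that assumption only:

    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y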
00:07:14.678 15:48:41 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:14.678 15:48:41 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:14.678 15:48:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:14.678 15:48:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:14.678 15:48:41 accel -- common/autotest_common.sh@10 -- # set +x
00:07:14.678 ************************************
00:07:14.678 START TEST accel_copy_crc32c
00:07:14.678 ************************************
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:14.678 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:14.678 [2024-07-15 15:48:41.413560] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:14.678 [2024-07-15 15:48:41.413622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043816 ]
00:07:14.678 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.678 [2024-07-15 15:48:41.475718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.678 [2024-07-15 15:48:41.594099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:14.937 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:14.938 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:14.938 15:48:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:16.311 15:48:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:16.311
00:07:16.311 real 0m1.475s
00:07:16.311 user 0m1.326s
00:07:16.311 sys 0m0.151s
00:07:16.311 15:48:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:16.311 15:48:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:16.311 ************************************
00:07:16.311 END TEST accel_copy_crc32c
00:07:16.311 ************************************
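The recurring [[ software == \s\o\f\t\w\a\r\e ]] check is not garbled output: it is how xtrace prints a quoted right-hand side of [[ == ]]. Escaping every character keeps the comparison a literal string match instead of a glob. A standalone bash demonstration:

    accel_module=software
    [[ $accel_module == \s\o\f\t\w\a\r\e ]] && echo match   # literal compare
    [[ $accel_module == "software" ]] && echo match          # equivalent quoting
    [[ $accel_module == s* ]] && echo glob                   # unquoted == does glob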
00:07:16.311 15:48:42 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:16.311 15:48:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:16.311 15:48:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:16.311 15:48:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:16.311 15:48:42 accel -- common/autotest_common.sh@10 -- # set +x
00:07:16.311 ************************************
00:07:16.311 START TEST accel_copy_crc32c_C2
00:07:16.311 ************************************
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:16.311 15:48:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:16.311 [2024-07-15 15:48:42.929703] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:16.311 [2024-07-15 15:48:42.929757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043969 ]
00:07:16.311 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.311 [2024-07-15 15:48:42.990465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.311 [2024-07-15 15:48:43.112710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:16.312 15:48:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:17.684 15:48:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:17.684 15:48:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:17.684 15:48:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:17.684
00:07:17.684 real 0m1.481s
00:07:17.684 user 0m1.342s
00:07:17.684 sys 0m0.141s
00:07:17.684 15:48:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:17.684 15:48:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:17.684 ************************************
00:07:17.684 END TEST accel_copy_crc32c_C2
00:07:17.684 ************************************
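The real/user/sys triple after every case has the shape of bash's time keyword output, which run_test presumably wraps around each test function; the same shape can be reproduced by hand (binary and flags copied from the dualcast case that follows):

    time ./build/examples/accel_perf -t 1 -w dualcast -y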
00:07:17.684 [2024-07-15 15:48:44.458555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044242 ]
00:07:17.684 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.684 [2024-07-15 15:48:44.525065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.943 [2024-07-15 15:48:44.648415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:17.943 15:48:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:19.350 15:48:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:19.350 15:48:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:19.350 15:48:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:19.350 real 0m1.476s
00:07:19.350 user 0m1.339s
00:07:19.350 sys 0m0.139s
00:07:19.350 ************************************
00:07:19.350 END TEST accel_dualcast
00:07:19.350 ************************************
00:07:19.350 15:48:45 accel -- common/autotest_common.sh@1142 -- # return 0
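Each case in this suite reduces to one accel_perf invocation with the settings echoed in its trace (the operation, 4096-byte buffers, the software module, a 1-second run). A minimal sketch of reproducing the dualcast case by hand, assuming the same build tree as this job; the -c /dev/fd/62 seen in the traces is a JSON-config descriptor wired up by the accel.sh harness, so a standalone run would simply omit it or point -c at a config file:

  # run the software dualcast workload for 1 second
  # -t, -w and -y are taken verbatim from the traced command line
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dualcast -y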
00:07:19.350 15:48:45 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:19.350 ************************************
00:07:19.350 START TEST accel_compare
00:07:19.350 ************************************
00:07:19.350 15:48:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:19.350 15:48:45 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:19.350 15:48:45 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:19.350 15:48:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:07:19.350 [2024-07-15 15:48:45.981558] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:19.350 [2024-07-15 15:48:45.981624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044406 ]
00:07:19.350 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.350 [2024-07-15 15:48:46.043957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.350 [2024-07-15 15:48:46.167144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.351 15:48:46 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:20.727 15:48:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:20.727 15:48:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:20.727 15:48:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:20.727 real 0m1.489s
00:07:20.727 user 0m1.333s
00:07:20.727 sys 0m0.157s
00:07:20.727 ************************************
00:07:20.727 END TEST accel_compare
00:07:20.727 ************************************
00:07:20.727 15:48:47 accel -- common/autotest_common.sh@1142 -- # return 0
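The case/IFS=:/read -r var val scaffolding that repeats throughout these traces is accel.sh consuming its expected settings as colon-separated var:val pairs. A rough sketch of that shape, simplified for illustration (the branch labels and the input redirection here are hypothetical, not the literal accel/accel.sh code):

  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;     # e.g. compare
      module) accel_module=$val ;;  # e.g. software
    esac
  done < expected_settings          # hypothetical input source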
00:07:20.727 15:48:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:20.727 ************************************
00:07:20.727 START TEST accel_xor
00:07:20.727 ************************************
00:07:20.727 15:48:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:20.727 15:48:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:20.727 15:48:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:20.727 15:48:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:20.727 [2024-07-15 15:48:47.515037] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:20.727 [2024-07-15 15:48:47.515105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044563 ]
00:07:20.727 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.727 [2024-07-15 15:48:47.577291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.986 [2024-07-15 15:48:47.698679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:20.986 15:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:22.361 15:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.361 15:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:22.361 15:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.361 real 0m1.479s
00:07:22.361 user 0m1.342s
00:07:22.361 sys 0m0.139s
00:07:22.361 ************************************
00:07:22.361 END TEST accel_xor
00:07:22.361 ************************************
00:07:22.361 15:48:48 accel -- common/autotest_common.sh@1142 -- # return 0
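The START/END banners and the real/user/sys triple that bracket every case come from the run_test wrapper in common/autotest_common.sh, whose line numbers appear throughout the trace. A simplified sketch of the idea (the real wrapper also manages xtrace state and exit-status bookkeeping, omitted here):

  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                 # e.g. accel_test -t 1 -w xor -y -x 3
    echo "END TEST $name"
  }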
00:07:22.361 15:48:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:22.361 ************************************
00:07:22.361 START TEST accel_xor
00:07:22.361 ************************************
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:22.361 [2024-07-15 15:48:49.036826] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:22.361 [2024-07-15 15:48:49.036902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044836 ]
00:07:22.361 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.361 [2024-07-15 15:48:49.102630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.361 [2024-07-15 15:48:49.223652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:22.361 15:48:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:22.362 15:48:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:23.733 15:48:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.733 15:48:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:23.733 15:48:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.733 real 0m1.470s
00:07:23.733 user 0m1.331s
00:07:23.733 sys 0m0.140s
00:07:23.733 ************************************
00:07:23.733 END TEST accel_xor
00:07:23.733 ************************************
00:07:23.733 15:48:50 accel -- common/autotest_common.sh@1142 -- # return 0
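The only configured difference from the previous xor pass is the source-buffer count: the first run traces val=2 where this one traces val=3, matching the extra -x 3 argument. Reading -x as the number of xor source buffers is an interpretation of these two traces rather than something the log states outright:

  accel_perf -t 1 -w xor -y         # two source buffers (val=2 in the previous run)
  accel_perf -t 1 -w xor -y -x 3    # three source buffers (val=3 in this run)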
00:07:23.733 15:48:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:23.733 ************************************
00:07:23.733 START TEST accel_dif_verify
00:07:23.733 ************************************
00:07:23.734 15:48:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:23.734 15:48:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:23.734 15:48:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:23.734 15:48:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:07:23.734 [2024-07-15 15:48:50.553997] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:23.734 [2024-07-15 15:48:50.554063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044998 ]
00:07:23.734 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.734 [2024-07-15 15:48:50.615719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.991 [2024-07-15 15:48:50.738763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.991 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:23.991 15:48:50 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:23.991 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.991 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:23.992 15:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:25.362 15:48:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.362 15:48:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:25.362 15:48:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.362 real 0m1.489s
00:07:25.362 user 0m1.344s
00:07:25.362 sys 0m0.148s
00:07:25.362 ************************************
00:07:25.362 END TEST accel_dif_verify
00:07:25.362 ************************************
00:07:25.362 15:48:52 accel -- common/autotest_common.sh@1142 -- # return 0
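Unlike the earlier cases, the DIF traces carry two extra sizes, '512 bytes' and '8 bytes'. Assuming these are the DIF block size and the per-block protection-information size (the standard T10 DIF layout), each 4096-byte buffer splits into 4096 / 512 = 8 blocks, so every buffer carries 8 x 8 = 64 bytes of protection information to generate or verify.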
00:07:25.362 15:48:52 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:25.362 ************************************
00:07:25.362 START TEST accel_dif_generate
00:07:25.362 ************************************
00:07:25.362 15:48:52 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:25.362 15:48:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:25.362 15:48:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:25.362 15:48:52 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:07:25.362 [2024-07-15 15:48:52.086408] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:25.362 [2024-07-15 15:48:52.086474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045151 ]
00:07:25.362 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.362 [2024-07-15 15:48:52.147332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.620 [2024-07-15 15:48:52.270033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:25.620 15:48:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:26.991 15:48:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:26.991 15:48:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:26.992 15:48:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:26.992 real 0m1.475s
00:07:26.992 user 0m1.340s
00:07:26.992 sys 0m0.139s
00:07:26.992 ************************************
00:07:26.992 END TEST accel_dif_generate
00:07:26.992 ************************************
00:07:26.992 15:48:53 accel -- common/autotest_common.sh@1142 -- # return 0
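Every case here requests a 1-second measurement window (the '1 seconds' value in each trace), yet wall time lands consistently around 1.47-1.49 s, so each case appears to pay roughly half a second of fixed startup and teardown (EAL initialization, hugepage setup, reactor start) on top of the timed run:

  per-case overhead = real time - timed window = 1.475 s - 1.0 s = 0.475 s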
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.992 00:07:26.992 real 0m1.475s 00:07:26.992 user 0m1.340s 00:07:26.992 sys 0m0.139s 00:07:26.992 15:48:53 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.992 15:48:53 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:26.992 ************************************ 00:07:26.992 END TEST accel_dif_generate 00:07:26.992 ************************************ 00:07:26.992 15:48:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.992 15:48:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:26.992 15:48:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:26.992 15:48:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.992 15:48:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.992 ************************************ 00:07:26.992 START TEST accel_dif_generate_copy 00:07:26.992 ************************************ 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:26.992 [2024-07-15 15:48:53.606770] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
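For reference, the accel_dif_generate pass above exercised the software module against what the trace shows as 4096-byte source/destination buffers, a 512-byte block size and 8 bytes of DIF metadata, completing in 0m1.475s real. A minimal sketch of driving the same two DIF workloads by hand, assuming the SPDK tree is already built at the workspace path the log shows (the harness pipes its accel JSON config to accel_perf over /dev/fd/62; with no hardware module configured, dropping -c entirely is an assumption that should behave the same):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software-module runs of the two DIF workloads seen above
  ./build/examples/accel_perf -t 1 -w dif_generate
  ./build/examples/accel_perf -t 1 -w dif_generate_copy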
00:07:26.992 [2024-07-15 15:48:53.606837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045428 ] 00:07:26.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.992 [2024-07-15 15:48:53.673092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.992 [2024-07-15 15:48:53.795835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.992 15:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.371 00:07:28.371 real 0m1.486s 00:07:28.371 user 0m1.345s 00:07:28.371 sys 0m0.144s 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.371 15:48:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.371 ************************************ 00:07:28.371 END TEST accel_dif_generate_copy 00:07:28.371 ************************************ 00:07:28.371 15:48:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.371 15:48:55 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:28.371 15:48:55 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.371 15:48:55 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:28.371 15:48:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.371 15:48:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.371 ************************************ 00:07:28.371 START TEST accel_comp 00:07:28.371 ************************************ 00:07:28.371 15:48:55 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.371 15:48:55 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:28.371 15:48:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:28.371 [2024-07-15 15:48:55.146852] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:28.371 [2024-07-15 15:48:55.147030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045587 ] 00:07:28.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.371 [2024-07-15 15:48:55.210339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.630 [2024-07-15 15:48:55.332316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.630 15:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:30.005 15:48:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.005 00:07:30.005 real 0m1.494s 00:07:30.005 user 0m1.348s 00:07:30.005 sys 0m0.150s 00:07:30.005 15:48:56 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.005 15:48:56 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:30.005 ************************************ 00:07:30.005 END TEST accel_comp 00:07:30.005 ************************************ 00:07:30.005 15:48:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.005 15:48:56 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:30.005 15:48:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.005 15:48:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.005 15:48:56 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.005 ************************************ 00:07:30.005 START TEST accel_decomp 00:07:30.005 ************************************ 00:07:30.005 15:48:56 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:30.005 15:48:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:30.005 [2024-07-15 15:48:56.686109] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
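The compress/decompress cases that follow differ from the DIF ones only in the flags visible in the run_test lines: -l supplies an input corpus (the bundled test/accel/bib file) and -y, used only on the decompress runs, appears to ask accel_perf to verify the inflated output. A sketch under the same assumptions as above:

  # deflate the bundled corpus, then inflate it and check the result
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y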
00:07:30.005 [2024-07-15 15:48:56.686174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045744 ] 00:07:30.005 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.005 [2024-07-15 15:48:56.748608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.005 [2024-07-15 15:48:56.872839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.264 15:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.639 15:48:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.639 00:07:31.639 real 0m1.487s 00:07:31.639 user 0m1.347s 00:07:31.639 sys 0m0.143s 00:07:31.639 15:48:58 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.639 15:48:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 ************************************ 00:07:31.639 END TEST accel_decomp 00:07:31.639 ************************************ 00:07:31.639 15:48:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.639 15:48:58 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.639 15:48:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:31.639 15:48:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.639 15:48:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 ************************************ 00:07:31.639 START TEST accel_decomp_full 00:07:31.639 ************************************ 00:07:31.639 15:48:58 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.639 15:48:58 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.639 15:48:58 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:31.640 [2024-07-15 15:48:58.218174] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:31.640 [2024-07-15 15:48:58.218239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046017 ] 00:07:31.640 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.640 [2024-07-15 15:48:58.282801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.640 [2024-07-15 15:48:58.405703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.640 15:48:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.015 15:48:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.016 15:48:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.016 00:07:33.016 real 0m1.502s 00:07:33.016 user 0m1.361s 00:07:33.016 sys 0m0.144s 00:07:33.016 15:48:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.016 15:48:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:33.016 ************************************ 00:07:33.016 END TEST accel_decomp_full 00:07:33.016 ************************************ 00:07:33.016 15:48:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.016 15:48:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.016 15:48:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
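accel_decomp_full, which just finished above in 0m1.502s real, repeats the decompress run with -o 0 appended. Reading the trace, the buffer value changes from '4096 bytes' to '111250 bytes', which suggests -o 0 switches the harness from 4 KiB chunks to the full size of the compressed bib file; that interpretation of -o is an assumption here, not something the log states. A sketch:

  # full-buffer decompress variant, assuming -o 0 means "whole file"
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0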
00:07:33.016 15:48:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.016 15:48:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.016 ************************************ 00:07:33.016 START TEST accel_decomp_mcore 00:07:33.016 ************************************ 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:33.016 15:48:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:33.016 [2024-07-15 15:48:59.771807] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
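The mcore variant starting here adds -m 0xf to the accel_perf command line shown above; as the EAL output below confirms, that core mask brings up four reactors (cores 0 through 3) instead of the single core 0 used by every run so far. The equivalent manual invocation, same assumptions as the earlier sketches:

  # same decompress workload fanned out across four reactors
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf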
00:07:33.016 [2024-07-15 15:48:59.771888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046179 ] 00:07:33.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.016 [2024-07-15 15:48:59.836223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.275 [2024-07-15 15:48:59.961847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.275 [2024-07-15 15:48:59.961909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.275 [2024-07-15 15:48:59.961936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.275 [2024-07-15 15:48:59.961940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:33.275 15:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.659 15:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.659 00:07:34.659 real 0m1.503s 00:07:34.659 user 0m4.819s 00:07:34.659 sys 0m0.157s 00:07:34.659 15:49:01 
00:07:34.659 15:49:01 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:34.659 15:49:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:34.659 15:49:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:34.659 15:49:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:34.659 15:49:01 accel -- common/autotest_common.sh@10 -- # set +x
00:07:34.659 ************************************
00:07:34.659 START TEST accel_decomp_full_mcore
00:07:34.659 ************************************
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:07:34.659 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:34.659 [2024-07-15 15:49:01.322791] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:34.659 [2024-07-15 15:49:01.322857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046356 ]
00:07:34.659 EAL: No free 2048 kB hugepages reported on node 1
00:07:34.659 [2024-07-15 15:49:01.384981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:34.659 [2024-07-15 15:49:01.507603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:34.659 [2024-07-15 15:49:01.507656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:34.659 [2024-07-15 15:49:01.507707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:34.659 [2024-07-15 15:49:01.507711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:34.660 15:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:36.070 15:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:36.070 15:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:36.070 15:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:36.070
00:07:36.070 real	0m1.507s
00:07:36.070 user	0m4.858s
00:07:36.070 sys	0m0.148s
00:07:36.070 15:49:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:36.070 15:49:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:36.070 ************************************
00:07:36.070 END TEST accel_decomp_full_mcore
00:07:36.070 ************************************
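(Every accel_perf launch in this log reads its JSON accel config from /dev/fd/62: build_accel_config assembles accel_json_cfg, which stays empty in these runs, jq -r . normalizes it, and the result is attached as fd 62 so no temp file is needed. A sketch of the idea; the fd wiring is assumed rather than verbatim accel.sh plumbing:)

    accel_json_cfg=()    # no driver/module overrides requested, as in the runs above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf \
        62< <(printf '%s\n' "${accel_json_cfg[@]}" | jq -r .)   # fd 62 carries the (empty) JSON config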
00:07:36.070 15:49:02 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:36.071 15:49:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:36.071 15:49:02 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:36.071 15:49:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:36.071 15:49:02 accel -- common/autotest_common.sh@10 -- # set +x
00:07:36.071 ************************************
00:07:36.071 START TEST accel_decomp_mthread
00:07:36.071 ************************************
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:07:36.071 15:49:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:07:36.071 [2024-07-15 15:49:02.882129] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:36.071 [2024-07-15 15:49:02.882194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046617 ]
00:07:36.071 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.071 [2024-07-15 15:49:02.945570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.330 [2024-07-15 15:49:03.066109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:07:36.330 15:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:07:37.712 15:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:37.712 15:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:37.712 15:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:37.713
00:07:37.713 real	0m1.496s
00:07:37.713 user	0m1.353s
00:07:37.713 sys	0m0.146s
00:07:37.713 15:49:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:37.713 15:49:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:37.713 ************************************
00:07:37.713 END TEST accel_decomp_mthread
00:07:37.713 ************************************
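(The START/END banners and the real/user/sys blocks around every test here are produced by the run_test wrapper in common/autotest_common.sh. The real helper also handles exit codes and suite bookkeeping; this stripped-down reconstruction only mirrors the output shape seen in this log:)

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys timing lines in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }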
00:07:37.713 15:49:04 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:37.713 15:49:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:37.713 15:49:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:37.713 15:49:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:37.713 15:49:04 accel -- common/autotest_common.sh@10 -- # set +x
00:07:37.713 ************************************
00:07:37.713 START TEST accel_decomp_full_mthread
00:07:37.713 ************************************
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:07:37.713 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:07:37.713 [2024-07-15 15:49:04.427683] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:37.713 [2024-07-15 15:49:04.427749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046774 ]
00:07:37.713 EAL: No free 2048 kB hugepages reported on node 1
00:07:37.713 [2024-07-15 15:49:04.491041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.972 [2024-07-15 15:49:04.613424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:07:37.972 15:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:07:39.354 15:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.354 15:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:39.354 15:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.354
00:07:39.354 real	0m1.531s
00:07:39.354 user	0m1.379s
00:07:39.354 sys	0m0.155s
00:07:39.354 15:49:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:39.354 15:49:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:39.354 ************************************
00:07:39.354 END TEST accel_decomp_full_mthread
00:07:39.354 ************************************
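(Side by side, the four decompress runs differ only in core mask versus thread count, and in whether -o 0 selects the full 111250-byte bib payload instead of 4096-byte chunks; the payload sizes are taken from the val='...' lines above, while the exact -o semantics are not spelled out in this log. The timing supports the split: the mcore runs burn ~4.8s of user time in ~1.5s of wall time, i.e. roughly three of the four reactors kept busy, whereas the single-reactor -T 2 runs stay near 1:1. The first command below is inferred from the test name, since only the latter three appear verbatim in this section:)

    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    accel_test -t 1 -w decompress -l "$bib" -y -m 0xf         # accel_decomp_mcore (inferred): 4 reactors, real 1.503s / user 4.819s
    accel_test -t 1 -w decompress -l "$bib" -y -o 0 -m 0xf    # accel_decomp_full_mcore: 4 reactors, real 1.507s / user 4.858s
    accel_test -t 1 -w decompress -l "$bib" -y -T 2           # accel_decomp_mthread: 1 reactor, real 1.496s / user 1.353s
    accel_test -t 1 -w decompress -l "$bib" -y -o 0 -T 2      # accel_decomp_full_mthread: 1 reactor, real 1.531s / user 1.379s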
00:07:39.354 15:49:05 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:39.354 15:49:05 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:07:39.354 15:49:05 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:39.354 15:49:05 accel -- accel/accel.sh@137 -- # build_accel_config
00:07:39.354 15:49:05 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:39.354 15:49:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:39.354 15:49:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:39.354 15:49:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:39.354 15:49:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:39.354 15:49:05 accel -- common/autotest_common.sh@10 -- # set +x
00:07:39.354 15:49:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:39.354 15:49:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:39.354 15:49:05 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:39.354 15:49:05 accel -- accel/accel.sh@41 -- # jq -r .
00:07:39.354 ************************************
00:07:39.354 START TEST accel_dif_functional_tests
00:07:39.354 ************************************
00:07:39.354 15:49:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:39.354 [2024-07-15 15:49:06.027420] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:39.354 [2024-07-15 15:49:06.027491] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047051 ]
00:07:39.354 EAL: No free 2048 kB hugepages reported on node 1
00:07:39.354 [2024-07-15 15:49:06.090990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:39.354 [2024-07-15 15:49:06.216280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.354 [2024-07-15 15:49:06.216332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.354 [2024-07-15 15:49:06.216336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.614
00:07:39.614
00:07:39.614 CUnit - A unit testing framework for C - Version 2.1-3
00:07:39.614 http://cunit.sourceforge.net/
00:07:39.614
00:07:39.614
00:07:39.614 Suite: accel_dif
00:07:39.615   Test: verify: DIF generated, GUARD check ...passed
00:07:39.615   Test: verify: DIF generated, APPTAG check ...passed
00:07:39.615   Test: verify: DIF generated, REFTAG check ...passed
00:07:39.615   Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:49:06.316662] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:39.615 passed
00:07:39.615   Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:49:06.316731] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:39.615 passed
00:07:39.615   Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:49:06.316763] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:39.615 passed
00:07:39.615   Test: verify: APPTAG correct, APPTAG check ...passed
00:07:39.615   Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 15:49:06.316825] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:39.615 passed
00:07:39.615   Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:39.615   Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:39.615   Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:39.615   Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:49:06.316994] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:39.615 passed
00:07:39.615   Test: verify copy: DIF generated, GUARD check ...passed
00:07:39.615   Test: verify copy: DIF generated, APPTAG check ...passed
00:07:39.615   Test: verify copy: DIF generated, REFTAG check ...passed
00:07:39.615   Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:49:06.317163] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:39.615 passed
00:07:39.615   Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:49:06.317228] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:39.615 passed
00:07:39.615   Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:49:06.317262] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:39.615 passed
00:07:39.615   Test: generate copy: DIF generated, GUARD check ...passed
00:07:39.615   Test: generate copy: DIF generated, APPTAG check ...passed
00:07:39.615   Test: generate copy: DIF generated, REFTAG check ...passed
00:07:39.615   Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:39.615   Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:39.615   Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:39.615   Test: generate copy: iovecs-len validate ...[2024-07-15 15:49:06.317483] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:39.615 passed
00:07:39.615   Test: generate copy: buffer alignment validate ...passed
00:07:39.615
00:07:39.615 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:39.615               suites      1      1    n/a      0        0
00:07:39.615                tests     26     26     26      0        0
00:07:39.615              asserts    115    115    115      0      n/a
00:07:39.615
00:07:39.615 Elapsed time =    0.003 seconds
00:07:39.874
00:07:39.874 real	0m0.598s
00:07:39.874 user	0m0.898s
00:07:39.874 sys	0m0.183s
00:07:39.874 15:49:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:39.874 15:49:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:39.874 ************************************
00:07:39.874 END TEST accel_dif_functional_tests
00:07:39.874 ************************************
00:07:39.874 15:49:06 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:39.874
00:07:39.874 real	0m33.583s
00:07:39.874 user	0m37.052s
00:07:39.874 sys	0m4.591s
00:07:39.874 15:49:06 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:39.874 15:49:06 accel -- common/autotest_common.sh@10 -- # set +x
00:07:39.874 ************************************
00:07:39.874 END TEST accel
00:07:39.874 ************************************
00:07:39.874 15:49:06 -- common/autotest_common.sh@1142 -- # return 0
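(The CUnit suite above is the standalone DIF binary, driven the same way as accel_perf with an empty JSON accel config on fd 62, and every *ERROR* line it printed is expected: those are the negative-path cases that feed deliberately corrupted Guard/App Tag/Ref Tag values into the verify and generate-copy paths, which is why each still ends in "passed". Replaying it by hand, with the fd-62 wiring assumed as before:)

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif \
        -c /dev/fd/62 62< <(echo '{}' | jq -r .)   # empty config object on fd 62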
00:07:39.874 15:49:06 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:39.874 15:49:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:39.874 15:49:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:39.874 15:49:06 -- common/autotest_common.sh@10 -- # set +x
00:07:39.874 ************************************
00:07:39.874 START TEST accel_rpc
00:07:39.874 ************************************
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:39.874 * Looking for test storage...
00:07:39.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:39.874 15:49:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:39.874 15:49:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1047127
00:07:39.874 15:49:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:39.874 15:49:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1047127
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1047127 ']'
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:39.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:39.874 15:49:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:39.874 [2024-07-15 15:49:06.763499] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:39.874 [2024-07-15 15:49:06.763599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047127 ]
00:07:40.133 EAL: No free 2048 kB hugepages reported on node 1
00:07:40.134 [2024-07-15 15:49:06.821399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.134 [2024-07-15 15:49:06.926604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.134 15:49:06 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:40.134 15:49:06 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:40.134 15:49:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:07:40.134 15:49:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:07:40.134 15:49:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:07:40.134 15:49:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:07:40.134 15:49:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:07:40.134 15:49:06 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:40.134 15:49:06 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:40.134 15:49:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:40.134 ************************************
00:07:40.134 START TEST accel_assign_opcode
00:07:40.134 ************************************
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:07:40.134 [2024-07-15 15:49:06.983232] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:07:40.134 [2024-07-15 15:49:06.991237] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.134 15:49:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.392 software
00:07:40.392
00:07:40.392 real	0m0.301s
00:07:40.392 user	0m0.039s
00:07:40.392 sys	0m0.008s
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:40.392 15:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:40.392 ************************************
00:07:40.392 END TEST accel_assign_opcode
00:07:40.392 ************************************
00:07:40.392 15:49:07 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:40.392 15:49:07 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1047127
00:07:40.392 15:49:07 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1047127 ']'
00:07:40.392 15:49:07 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1047127
00:07:40.392 15:49:07 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:07:40.392 15:49:07 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047127
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047127'
00:07:40.652 killing process with pid 1047127
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@967 -- # kill 1047127
00:07:40.652 15:49:07 accel_rpc -- common/autotest_common.sh@972 -- # wait 1047127
00:07:40.911
00:07:40.911 real	0m1.166s
00:07:40.911 user	0m1.098s
00:07:40.911 sys	0m0.421s
00:07:40.911 15:49:07 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:40.911 15:49:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:41.170 ************************************
00:07:41.170 END TEST accel_rpc
00:07:41.170 ************************************
00:07:41.170 15:49:07 -- common/autotest_common.sh@1142 -- # return 0
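(accel_assign_opcode boils down to four RPCs against a spdk_tgt started with --wait-for-rpc; rpc_cmd in the log is autotest's thin wrapper over scripts/rpc.py, so the same sequence can be replayed directly:)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect   # accepted pre-init even for a bogus module name
    $rpc accel_assign_opc -o copy -m software    # re-assign the copy opcode to the software module
    $rpc framework_start_init                    # subsystem init applies the assignment
    $rpc accel_get_opc_assignments | jq -r .copy # prints "software", which the test greps for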
00:07:41.170 15:49:07 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:41.170 15:49:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:41.170 15:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:41.170 15:49:07 -- common/autotest_common.sh@10 -- # set +x
00:07:41.170 ************************************
00:07:41.170 START TEST app_cmdline
00:07:41.170 ************************************
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:41.170 * Looking for test storage...
00:07:41.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:07:41.170 15:49:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:41.170 15:49:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1047331
00:07:41.170 15:49:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:41.170 15:49:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1047331
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1047331 ']'
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:41.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:41.170 15:49:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:41.171 [2024-07-15 15:49:07.977631] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:41.171 [2024-07-15 15:49:07.977729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047331 ]
00:07:41.171 EAL: No free 2048 kB hugepages reported on node 1
00:07:41.171 [2024-07-15 15:49:08.044662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.429 [2024-07-15 15:49:08.167742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.000 15:49:08 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:42.000 15:49:08 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:07:42.000 15:49:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:42.258 {
00:07:42.258   "version": "SPDK v24.09-pre git sha1 a95bbf233",
00:07:42.258   "fields": {
00:07:42.258     "major": 24,
00:07:42.258     "minor": 9,
00:07:42.258     "patch": 0,
00:07:42.258     "suffix": "-pre",
00:07:42.258     "commit": "a95bbf233"
00:07:42.258   }
00:07:42.258 }
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:42.258 15:49:09 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:42.517 15:49:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:42.517 15:49:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
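(cmdline.sh does not parse version.h here; it asks the running target, which was started with --rpcs-allowed spdk_get_version,rpc_get_methods so that exactly those two methods are callable. The same query by hand:)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc spdk_get_version | jq -r .version    # -> SPDK v24.09-pre git sha1 a95bbf233
    $rpc rpc_get_methods | jq -r '.[]' | sort # -> rpc_get_methods, spdk_get_version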
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.517 15:49:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.517 15:49:09 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.517 request: 00:07:42.517 { 00:07:42.517 "method": "env_dpdk_get_mem_stats", 00:07:42.517 "req_id": 1 00:07:42.517 } 00:07:42.517 Got JSON-RPC error response 00:07:42.517 response: 00:07:42.517 { 00:07:42.517 "code": -32601, 00:07:42.517 "message": "Method not found" 00:07:42.517 } 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.776 15:49:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1047331 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1047331 ']' 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1047331 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047331 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047331' 00:07:42.776 killing process with pid 1047331 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@967 -- # kill 1047331 00:07:42.776 15:49:09 app_cmdline -- common/autotest_common.sh@972 -- # wait 1047331 00:07:43.038 00:07:43.038 real 0m2.095s 00:07:43.038 user 0m2.630s 00:07:43.038 sys 0m0.492s 00:07:43.038 15:49:09 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
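The exchange traced above is the heart of the app_cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served, and anything else must come back as JSON-RPC error -32601 (Method not found), which is exactly the response asserted for env_dpdk_get_mem_stats. A minimal sketch of the same check against a running target (paths are relative to an SPDK checkout; the jq filter mirrors the one used by the test):

  # start the target with a restricted RPC whitelist, as the test does
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  # whitelisted methods answer normally
  scripts/rpc.py spdk_get_version                 # JSON with major/minor/patch/suffix
  scripts/rpc.py rpc_get_methods | jq -r '.[]'    # exactly the two allowed names

  # any other method is rejected with code -32601
  scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'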
00:07:43.038 15:49:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 ************************************ 00:07:43.297 END TEST app_cmdline 00:07:43.297 ************************************ 00:07:43.297 15:49:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.297 15:49:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.297 15:49:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.297 15:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.297 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 ************************************ 00:07:43.297 START TEST version 00:07:43.297 ************************************ 00:07:43.297 15:49:10 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.297 * Looking for test storage... 00:07:43.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.297 15:49:10 version -- app/version.sh@17 -- # get_header_version major 00:07:43.297 15:49:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # cut -f2 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.297 15:49:10 version -- app/version.sh@17 -- # major=24 00:07:43.297 15:49:10 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.297 15:49:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # cut -f2 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.297 15:49:10 version -- app/version.sh@18 -- # minor=9 00:07:43.297 15:49:10 version -- app/version.sh@19 -- # get_header_version patch 00:07:43.297 15:49:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # cut -f2 00:07:43.297 15:49:10 version -- app/version.sh@19 -- # patch=0 00:07:43.297 15:49:10 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.297 15:49:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # cut -f2 00:07:43.297 15:49:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.297 15:49:10 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.297 15:49:10 version -- app/version.sh@22 -- # version=24.9 00:07:43.297 15:49:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.297 15:49:10 version -- app/version.sh@28 -- # version=24.9rc0 00:07:43.297 15:49:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.297 15:49:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:43.297 15:49:10 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:43.297 15:49:10 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:43.297 00:07:43.297 real 0m0.110s 00:07:43.297 user 0m0.065s 00:07:43.297 sys 0m0.068s 00:07:43.297 15:49:10 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.297 15:49:10 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 ************************************ 00:07:43.297 END TEST version 00:07:43.297 ************************************ 00:07:43.297 15:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.297 15:49:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@198 -- # uname -s 00:07:43.297 15:49:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:43.297 15:49:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.297 15:49:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.297 15:49:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:43.297 15:49:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.297 15:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 15:49:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:43.297 15:49:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:43.297 15:49:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.297 15:49:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.297 15:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.297 15:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 ************************************ 00:07:43.297 START TEST nvmf_tcp 00:07:43.297 ************************************ 00:07:43.297 15:49:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.555 * Looking for test storage... 00:07:43.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.555 15:49:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.556 15:49:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.556 15:49:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.556 15:49:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.556 15:49:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:43.556 15:49:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:43.556 15:49:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.556 15:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:43.556 15:49:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.556 15:49:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.556 15:49:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.556 15:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.556 ************************************ 00:07:43.556 START TEST nvmf_example 00:07:43.556 ************************************ 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.556 * Looking for test storage... 
00:07:43.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.556 15:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:45.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:45.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:45.457 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:45.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.457 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.716 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.716 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:07:45.717 00:07:45.717 --- 10.0.0.2 ping statistics --- 00:07:45.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.717 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:07:45.717 00:07:45.717 --- 10.0.0.1 ping statistics --- 00:07:45.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.717 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1049354 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1049354 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1049354 ']' 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
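The records above build the loopback topology the phy nvmf tests run on: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are ping-verified before nvme-tcp is loaded. A condensed sketch of that setup (interface names are the ones enumerated above; on other NICs they will differ):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target
  ip netns exec $NS ping -c 1 10.0.0.1               # target namespace -> initiator
  modprobe nvme-tcp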
00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.717 15:49:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.651 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:46.908 15:49:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:46.908 EAL: No free 2048 kB hugepages reported on node 1 
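With the example target running inside the namespace, the test provisions it over the usual RPC socket and then points spdk_nvme_perf at the new listener. The rpc_cmd sequence just traced, reduced to plain rpc.py calls (a sketch; bdev_malloc_create prints the bdev name, Malloc0 here, which the later calls consume):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512            # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'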
00:07:56.872 Initializing NVMe Controllers 00:07:56.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:56.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:56.872 Initialization complete. Launching workers. 00:07:56.872 ======================================================== 00:07:56.872 Latency(us) 00:07:56.872 Device Information : IOPS MiB/s Average min max 00:07:56.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14682.75 57.35 4358.39 886.89 15873.90 00:07:56.872 ======================================================== 00:07:56.872 Total : 14682.75 57.35 4358.39 886.89 15873.90 00:07:56.872 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.872 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.141 rmmod nvme_tcp 00:07:57.141 rmmod nvme_fabrics 00:07:57.141 rmmod nvme_keyring 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1049354 ']' 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1049354 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1049354 ']' 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1049354 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049354 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049354' 00:07:57.141 killing process with pid 1049354 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1049354 00:07:57.141 15:49:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1049354 00:07:57.435 nvmf threads initialize successfully 00:07:57.435 bdev subsystem init successfully 00:07:57.435 created a nvmf target service 00:07:57.435 create targets's poll groups done 00:07:57.435 all subsystems of target started 00:07:57.435 nvmf target is running 00:07:57.435 all subsystems of target stopped 00:07:57.435 destroy targets's poll groups done 00:07:57.435 destroyed the nvmf target service 00:07:57.435 bdev subsystem finish successfully 00:07:57.435 nvmf threads destroy successfully 00:07:57.435 15:49:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.435 15:49:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.338 00:07:59.338 real 0m15.909s 00:07:59.338 user 0m44.408s 00:07:59.338 sys 0m3.631s 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.338 15:49:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.338 ************************************ 00:07:59.338 END TEST nvmf_example 00:07:59.338 ************************************ 00:07:59.338 15:49:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:59.338 15:49:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.338 15:49:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.338 15:49:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.338 15:49:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.338 ************************************ 00:07:59.338 START TEST nvmf_filesystem 00:07:59.338 ************************************ 00:07:59.338 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.600 * Looking for test storage... 
00:07:59.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.600 15:49:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:59.600 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:59.600 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:59.600 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:59.600 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:59.601 15:49:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:59.601 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:59.601 #define SPDK_CONFIG_H 00:07:59.601 #define SPDK_CONFIG_APPS 1 00:07:59.601 #define SPDK_CONFIG_ARCH native 00:07:59.601 #undef SPDK_CONFIG_ASAN 00:07:59.601 #undef SPDK_CONFIG_AVAHI 00:07:59.601 #undef SPDK_CONFIG_CET 00:07:59.601 #define SPDK_CONFIG_COVERAGE 1 00:07:59.601 #define SPDK_CONFIG_CROSS_PREFIX 00:07:59.601 #undef SPDK_CONFIG_CRYPTO 00:07:59.601 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:59.601 #undef SPDK_CONFIG_CUSTOMOCF 00:07:59.601 #undef SPDK_CONFIG_DAOS 00:07:59.601 #define SPDK_CONFIG_DAOS_DIR 00:07:59.601 #define SPDK_CONFIG_DEBUG 1 00:07:59.601 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:59.601 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:59.601 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:59.601 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:59.601 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:59.601 #undef SPDK_CONFIG_DPDK_UADK 00:07:59.601 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:59.601 #define SPDK_CONFIG_EXAMPLES 1 00:07:59.601 #undef SPDK_CONFIG_FC 00:07:59.601 #define SPDK_CONFIG_FC_PATH 00:07:59.601 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:59.601 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:59.601 #undef SPDK_CONFIG_FUSE 00:07:59.601 #undef SPDK_CONFIG_FUZZER 00:07:59.601 #define SPDK_CONFIG_FUZZER_LIB 00:07:59.601 #undef SPDK_CONFIG_GOLANG 00:07:59.601 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:59.601 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:59.601 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:59.601 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:59.601 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:59.601 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:59.601 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:59.601 #define SPDK_CONFIG_IDXD 1 00:07:59.601 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:59.602 #undef SPDK_CONFIG_IPSEC_MB 00:07:59.602 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:59.602 #define SPDK_CONFIG_ISAL 1 00:07:59.602 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:59.602 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:59.602 #define SPDK_CONFIG_LIBDIR 00:07:59.602 #undef SPDK_CONFIG_LTO 00:07:59.602 #define SPDK_CONFIG_MAX_LCORES 128 00:07:59.602 #define SPDK_CONFIG_NVME_CUSE 1 00:07:59.602 #undef SPDK_CONFIG_OCF 00:07:59.602 #define SPDK_CONFIG_OCF_PATH 00:07:59.602 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:59.602 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:59.602 #define SPDK_CONFIG_PGO_DIR 00:07:59.602 #undef SPDK_CONFIG_PGO_USE 00:07:59.602 #define SPDK_CONFIG_PREFIX /usr/local 00:07:59.602 #undef SPDK_CONFIG_RAID5F 00:07:59.602 #undef SPDK_CONFIG_RBD 00:07:59.602 #define SPDK_CONFIG_RDMA 1 00:07:59.602 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:59.602 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:59.602 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:59.602 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:59.602 #define SPDK_CONFIG_SHARED 1 00:07:59.602 #undef SPDK_CONFIG_SMA 00:07:59.602 #define SPDK_CONFIG_TESTS 1 00:07:59.602 #undef SPDK_CONFIG_TSAN 00:07:59.602 #define SPDK_CONFIG_UBLK 1 00:07:59.602 #define SPDK_CONFIG_UBSAN 1 00:07:59.602 #undef SPDK_CONFIG_UNIT_TESTS 00:07:59.602 #undef SPDK_CONFIG_URING 00:07:59.602 #define SPDK_CONFIG_URING_PATH 00:07:59.602 #undef SPDK_CONFIG_URING_ZNS 00:07:59.602 #undef SPDK_CONFIG_USDT 00:07:59.602 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:59.602 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:59.602 #define SPDK_CONFIG_VFIO_USER 1 00:07:59.602 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:59.602 #define SPDK_CONFIG_VHOST 1 00:07:59.602 #define SPDK_CONFIG_VIRTIO 1 00:07:59.602 #undef SPDK_CONFIG_VTUNE 00:07:59.602 #define SPDK_CONFIG_VTUNE_DIR 00:07:59.602 #define SPDK_CONFIG_WERROR 1 00:07:59.602 #define SPDK_CONFIG_WPDK_DIR 00:07:59.602 #undef SPDK_CONFIG_XNVME 00:07:59.602 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:59.602 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:59.603 15:49:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.603 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
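The long run of paired `: 0` / `export SPDK_TEST_...` entries traced above is autotest_common.sh giving every test flag a value and exporting it: under `set -x`, bash's default-parameter expansion prints as the bare `: 0` (or `: 1`, `: tcp`, ...) that precedes each export. A minimal sketch of the idiom, using flag names from the trace (the defaults here are illustrative; the traced values reflect what this CI job passed in):

# ${VAR:=default} assigns only when VAR is unset or empty, so values injected
# by the CI job survive; the export makes the flag visible to child scripts.
: "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
: "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT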
00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1051144 ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1051144 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.qVFqFw 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qVFqFw/tests/target /tmp/spdk.qVFqFw 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55496622080 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6498070528 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996312064 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:07:59.604 15:49:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1036288 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:59.604 * Looking for test storage... 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55496622080 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8712663040 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:59.604 15:49:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.604 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
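A few entries back, set_test_storage decides where this test's scratch data lives: the 2147483648-byte request is padded to requested_size=2214592512, `df -T` output is parsed into per-mount byte counts, and the first candidate directory whose mount has enough free space wins (here `/`, an overlay with roughly 55 GB available). A condensed sketch of that selection loop; the candidate list is illustrative, the parsing mirrors the trace:

#!/usr/bin/env bash
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB request plus the padding seen above
declare -A avails
# df -T reports 1K blocks: Filesystem Type 1K-blocks Used Available Use% Mounted-on.
while read -r _ _ _ _ avail _ mount; do
    avails["$mount"]=$((avail * 1024))              # compare in bytes, as the trace does
done < <(df -T | grep -v Filesystem)

for target_dir in "$PWD" /tmp; do                   # stand-ins for testdir and the mktemp fallback
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    if ((avails["$mount"] >= requested_size)); then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done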
00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.605 15:49:26 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.605 15:49:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.518 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.777 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.778 15:49:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:01.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:08:01.778 00:08:01.778 --- 10.0.0.2 ping statistics --- 00:08:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.778 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:01.778 00:08:01.778 --- 10.0.0.1 ping statistics --- 00:08:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.778 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.778 ************************************ 00:08:01.778 START TEST nvmf_filesystem_no_in_capsule 00:08:01.778 ************************************ 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1052808 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1052808 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1052808 ']' 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.778 15:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.037 [2024-07-15 15:49:28.712647] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:02.037 [2024-07-15 15:49:28.712730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.037 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.037 [2024-07-15 15:49:28.781942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.037 [2024-07-15 15:49:28.905507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.037 [2024-07-15 15:49:28.905580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.037 [2024-07-15 15:49:28.905597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.037 [2024-07-15 15:49:28.905611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.037 [2024-07-15 15:49:28.905622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
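For reference, the nvmf_tcp_init sequence traced above reduces to a small two-port topology: the NIC ports cvl_0_0 and cvl_0_1 (found under PCI 0000:0a:00.0 and 0000:0a:00.1) are split across a network namespace so that target and initiator exchange real NVMe/TCP traffic on a single host. A condensed sketch of the equivalent commands, using exactly the interface names and addresses the log reports:

  # Target port moves into its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

This is why the nvmf_tgt invocation just above is prefixed with "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), and why waitforlisten polls the UNIX domain socket /var/tmp/spdk.sock until the freshly started target answers. The RPC provisioning that follows (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) then exposes the 512 MiB Malloc1 bdev at 10.0.0.2:4420.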
00:08:02.037 [2024-07-15 15:49:28.905709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.037 [2024-07-15 15:49:28.905766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.037 [2024-07-15 15:49:28.905820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.037 [2024-07-15 15:49:28.905823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 [2024-07-15 15:49:29.723138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 Malloc1 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.967 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.224 [2024-07-15 15:49:29.909324] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.224 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:03.224 { 00:08:03.224 "name": "Malloc1", 00:08:03.224 "aliases": [ 00:08:03.224 "7048227f-8483-4e96-a3fa-e9487dbd5d27" 00:08:03.224 ], 00:08:03.224 "product_name": "Malloc disk", 00:08:03.224 "block_size": 512, 00:08:03.224 "num_blocks": 1048576, 00:08:03.224 "uuid": "7048227f-8483-4e96-a3fa-e9487dbd5d27", 00:08:03.224 "assigned_rate_limits": { 00:08:03.224 "rw_ios_per_sec": 0, 00:08:03.224 "rw_mbytes_per_sec": 0, 00:08:03.224 "r_mbytes_per_sec": 0, 00:08:03.224 "w_mbytes_per_sec": 0 00:08:03.224 }, 00:08:03.224 "claimed": true, 00:08:03.224 "claim_type": "exclusive_write", 00:08:03.225 "zoned": false, 00:08:03.225 "supported_io_types": { 00:08:03.225 "read": true, 00:08:03.225 "write": true, 00:08:03.225 "unmap": true, 00:08:03.225 "flush": true, 00:08:03.225 "reset": true, 00:08:03.225 "nvme_admin": false, 00:08:03.225 "nvme_io": false, 00:08:03.225 "nvme_io_md": false, 00:08:03.225 "write_zeroes": true, 00:08:03.225 "zcopy": true, 00:08:03.225 "get_zone_info": false, 00:08:03.225 "zone_management": false, 00:08:03.225 "zone_append": false, 00:08:03.225 "compare": false, 00:08:03.225 "compare_and_write": false, 00:08:03.225 "abort": true, 00:08:03.225 "seek_hole": false, 00:08:03.225 "seek_data": false, 00:08:03.225 "copy": true, 00:08:03.225 "nvme_iov_md": false 00:08:03.225 }, 00:08:03.225 "memory_domains": [ 00:08:03.225 { 
00:08:03.225 "dma_device_id": "system", 00:08:03.225 "dma_device_type": 1 00:08:03.225 }, 00:08:03.225 { 00:08:03.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.225 "dma_device_type": 2 00:08:03.225 } 00:08:03.225 ], 00:08:03.225 "driver_specific": {} 00:08:03.225 } 00:08:03.225 ]' 00:08:03.225 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:03.225 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:03.225 15:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:03.225 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:03.225 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:03.225 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:03.225 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.225 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.788 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.788 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:03.788 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.788 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:03.788 15:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.311 15:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:06.875 15:49:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.807 ************************************ 00:08:07.807 START TEST filesystem_ext4 00:08:07.807 ************************************ 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:07.807 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:07.808 15:49:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.808 mke2fs 1.46.5 (30-Dec-2021) 00:08:08.065 Discarding device blocks: 0/522240 done 00:08:08.065 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.065 Filesystem UUID: 87a55041-55d9-41d2-b0f3-d9dfe740b953 00:08:08.065 Superblock backups stored on blocks: 00:08:08.065 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.065 00:08:08.065 Allocating group tables: 0/64 done 00:08:08.065 Writing inode tables: 0/64 done 00:08:08.065 Creating journal (8192 blocks): done 00:08:08.065 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.065 00:08:08.065 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:08.065 15:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1052808 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.999 00:08:08.999 real 0m1.090s 00:08:08.999 user 0m0.019s 00:08:08.999 sys 0m0.057s 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.999 ************************************ 00:08:08.999 END TEST filesystem_ext4 00:08:08.999 ************************************ 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.999 15:49:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.999 ************************************ 00:08:08.999 START TEST filesystem_btrfs 00:08:08.999 ************************************ 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:08.999 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.257 btrfs-progs v6.6.2 00:08:09.257 See https://btrfs.readthedocs.io for more information. 00:08:09.257 00:08:09.257 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:09.257 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.257 this does not affect your deployments: 00:08:09.257 - DUP for metadata (-m dup) 00:08:09.257 - enabled no-holes (-O no-holes) 00:08:09.257 - enabled free-space-tree (-R free-space-tree) 00:08:09.257 00:08:09.257 Label: (null) 00:08:09.257 UUID: 4ccfd1d3-2da9-4db8-9238-2140fadab444 00:08:09.257 Node size: 16384 00:08:09.257 Sector size: 4096 00:08:09.257 Filesystem size: 510.00MiB 00:08:09.257 Block group profiles: 00:08:09.257 Data: single 8.00MiB 00:08:09.257 Metadata: DUP 32.00MiB 00:08:09.257 System: DUP 8.00MiB 00:08:09.257 SSD detected: yes 00:08:09.257 Zoned device: no 00:08:09.257 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.257 Runtime features: free-space-tree 00:08:09.257 Checksum: crc32c 00:08:09.257 Number of devices: 1 00:08:09.257 Devices: 00:08:09.257 ID SIZE PATH 00:08:09.257 1 510.00MiB /dev/nvme0n1p1 00:08:09.257 00:08:09.257 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.257 15:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1052808 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.192 00:08:10.192 real 0m1.170s 00:08:10.192 user 0m0.011s 00:08:10.192 sys 0m0.129s 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.192 ************************************ 00:08:10.192 END TEST filesystem_btrfs 00:08:10.192 ************************************ 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.192 ************************************ 00:08:10.192 START TEST filesystem_xfs 00:08:10.192 ************************************ 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:10.192 15:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.192 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.192 = sectsz=512 attr=2, projid32bit=1 00:08:10.192 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.192 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.192 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.192 = sunit=0 swidth=0 blks 00:08:10.192 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.192 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.192 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.192 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:11.125 Discarding blocks...Done. 
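Earlier in this test (the target/filesystem.sh@63 trace above), right after nvme connect reached nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, the harness resolved the new controller to a kernel device name by matching the subsystem serial. A standalone equivalent of that lookup:

  # Keep only the NAME column of the lsblk row whose SERIAL matches.
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  echo "$nvme_name"   # resolves to nvme0n1 in this run

The PCRE lookahead prints the device name while discarding the serial column; waitforserial used the same listing with grep -c to poll until exactly one matching device showed up.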
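The ext4, btrfs, and xfs subtests all funnel through the same make_filesystem helper, and the xtrace shows its one fstype-specific wrinkle: mkfs.ext4 spells its force flag -F, while mkfs.btrfs and mkfs.xfs use -f. A sketch of the helper's core as reconstructed from the trace (not the verbatim autotest_common.sh source; the retry counter visible as "local i=0" is omitted):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F    # ext4 uses the uppercase force flag
      else
          force=-f    # btrfs and xfs use the lowercase one
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }

Each subtest then mounts the fresh filesystem at /mnt/device, creates and deletes a file with syncs in between, unmounts, and confirms with kill -0 $nvmfpid that the target process survived the I/O before the lsblk assertions run.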
00:08:11.125 15:49:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:11.125 15:49:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1052808 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.023 00:08:13.023 real 0m2.873s 00:08:13.023 user 0m0.016s 00:08:13.023 sys 0m0.065s 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.023 ************************************ 00:08:13.023 END TEST filesystem_xfs 00:08:13.023 ************************************ 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.023 15:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.280 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.281 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.539 15:49:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1052808 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1052808 ']' 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1052808 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1052808 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1052808' 00:08:13.539 killing process with pid 1052808 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1052808 00:08:13.539 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1052808 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.106 00:08:14.106 real 0m12.125s 00:08:14.106 user 0m46.524s 00:08:14.106 sys 0m1.877s 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.106 ************************************ 00:08:14.106 END TEST nvmf_filesystem_no_in_capsule 00:08:14.106 ************************************ 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.106 ************************************ 00:08:14.106 START TEST nvmf_filesystem_in_capsule 00:08:14.106 ************************************ 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.106 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1054378 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1054378 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1054378 ']' 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.107 15:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.107 [2024-07-15 15:49:40.895112] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:14.107 [2024-07-15 15:49:40.895200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.107 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.107 [2024-07-15 15:49:40.962492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.365 [2024-07-15 15:49:41.084075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.365 [2024-07-15 15:49:41.084129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:14.365 [2024-07-15 15:49:41.084146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.365 [2024-07-15 15:49:41.084166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.365 [2024-07-15 15:49:41.084186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.365 [2024-07-15 15:49:41.084291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.365 [2024-07-15 15:49:41.084339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.365 [2024-07-15 15:49:41.084490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.365 [2024-07-15 15:49:41.084494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.961 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.961 [2024-07-15 15:49:41.888756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.219 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:15.219 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.219 15:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 Malloc1 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 [2024-07-15 15:49:42.076445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.219 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:15.219 { 00:08:15.219 "name": "Malloc1", 00:08:15.219 "aliases": [ 00:08:15.219 "955a2851-7aa2-4e07-8db3-2096eab88d65" 00:08:15.219 ], 00:08:15.219 "product_name": "Malloc disk", 00:08:15.219 "block_size": 512, 00:08:15.219 "num_blocks": 1048576, 00:08:15.219 "uuid": "955a2851-7aa2-4e07-8db3-2096eab88d65", 00:08:15.219 "assigned_rate_limits": { 00:08:15.219 "rw_ios_per_sec": 0, 00:08:15.219 "rw_mbytes_per_sec": 0, 00:08:15.219 "r_mbytes_per_sec": 0, 00:08:15.219 "w_mbytes_per_sec": 0 00:08:15.219 }, 00:08:15.219 "claimed": true, 00:08:15.219 "claim_type": "exclusive_write", 00:08:15.219 "zoned": false, 00:08:15.219 "supported_io_types": { 00:08:15.219 "read": true, 00:08:15.219 "write": true, 00:08:15.219 "unmap": true, 00:08:15.219 "flush": true, 00:08:15.219 "reset": true, 00:08:15.219 "nvme_admin": false, 00:08:15.219 "nvme_io": false, 00:08:15.219 "nvme_io_md": false, 00:08:15.219 "write_zeroes": true, 00:08:15.219 "zcopy": true, 00:08:15.219 "get_zone_info": false, 00:08:15.219 "zone_management": false, 00:08:15.219 
"zone_append": false, 00:08:15.219 "compare": false, 00:08:15.219 "compare_and_write": false, 00:08:15.219 "abort": true, 00:08:15.219 "seek_hole": false, 00:08:15.220 "seek_data": false, 00:08:15.220 "copy": true, 00:08:15.220 "nvme_iov_md": false 00:08:15.220 }, 00:08:15.220 "memory_domains": [ 00:08:15.220 { 00:08:15.220 "dma_device_id": "system", 00:08:15.220 "dma_device_type": 1 00:08:15.220 }, 00:08:15.220 { 00:08:15.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.220 "dma_device_type": 2 00:08:15.220 } 00:08:15.220 ], 00:08:15.220 "driver_specific": {} 00:08:15.220 } 00:08:15.220 ]' 00:08:15.220 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:15.220 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:15.220 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:15.477 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:15.477 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:15.477 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:15.477 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:15.477 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.042 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.042 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:16.042 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.042 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:16.042 15:49:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:17.938 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:18.196 15:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:19.127 15:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.061 ************************************ 00:08:20.061 START TEST filesystem_in_capsule_ext4 00:08:20.061 ************************************ 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:20.061 15:49:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:20.061 15:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:20.061 mke2fs 1.46.5 (30-Dec-2021) 00:08:20.061 Discarding device blocks: 0/522240 done 00:08:20.061 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:20.061 Filesystem UUID: f2068c41-7efe-4db9-9314-a5647cd541a9 00:08:20.061 Superblock backups stored on blocks: 00:08:20.061 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:20.061 00:08:20.061 Allocating group tables: 0/64 done 00:08:20.061 Writing inode tables: 0/64 done 00:08:20.320 Creating journal (8192 blocks): done 00:08:21.167 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:08:21.167 00:08:21.167 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:21.167 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.425 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1054378 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.426 00:08:21.426 real 0m1.550s 00:08:21.426 user 0m0.018s 00:08:21.426 sys 0m0.049s 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:21.426 ************************************ 00:08:21.426 END TEST filesystem_in_capsule_ext4 00:08:21.426 ************************************ 
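The in_capsule half of the suite running here differs from the first half in exactly one transport parameter: nvmf_filesystem_part was invoked with 4096 instead of 0, so the TCP transport was created with in-capsule data enabled and write payloads up to 4 KiB travel inside the NVMe/TCP command capsule itself rather than being fetched by the target in a separate R2T-driven data transfer. Side by side, as traced in the two runs:

  # nvmf_filesystem_no_in_capsule: in-capsule data disabled
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule: up to 4096 bytes of write data per capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

The mkfs/mount/touch/rm/umount workload is deliberately identical, so the same filesystem traffic exercises both data paths.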
00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.426 ************************************ 00:08:21.426 START TEST filesystem_in_capsule_btrfs 00:08:21.426 ************************************ 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:21.426 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.684 btrfs-progs v6.6.2 00:08:21.684 See https://btrfs.readthedocs.io for more information. 00:08:21.684 00:08:21.684 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:21.684 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.684 this does not affect your deployments: 00:08:21.684 - DUP for metadata (-m dup) 00:08:21.684 - enabled no-holes (-O no-holes) 00:08:21.684 - enabled free-space-tree (-R free-space-tree) 00:08:21.684 00:08:21.684 Label: (null) 00:08:21.684 UUID: a53bf8c0-1ca2-4cdc-ac12-07df92dbe789 00:08:21.684 Node size: 16384 00:08:21.684 Sector size: 4096 00:08:21.684 Filesystem size: 510.00MiB 00:08:21.684 Block group profiles: 00:08:21.684 Data: single 8.00MiB 00:08:21.684 Metadata: DUP 32.00MiB 00:08:21.684 System: DUP 8.00MiB 00:08:21.684 SSD detected: yes 00:08:21.684 Zoned device: no 00:08:21.684 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.684 Runtime features: free-space-tree 00:08:21.684 Checksum: crc32c 00:08:21.684 Number of devices: 1 00:08:21.684 Devices: 00:08:21.684 ID SIZE PATH 00:08:21.685 1 510.00MiB /dev/nvme0n1p1 00:08:21.685 00:08:21.685 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:21.685 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1054378 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.251 00:08:22.251 real 0m0.583s 00:08:22.251 user 0m0.024s 00:08:22.251 sys 0m0.110s 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.251 ************************************ 00:08:22.251 END TEST filesystem_in_capsule_btrfs 00:08:22.251 ************************************ 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.251 ************************************ 00:08:22.251 START TEST filesystem_in_capsule_xfs 00:08:22.251 ************************************ 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:22.251 15:49:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:22.251 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:22.251 = sectsz=512 attr=2, projid32bit=1 00:08:22.251 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:22.251 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:22.251 data = bsize=4096 blocks=130560, imaxpct=25 00:08:22.251 = sunit=0 swidth=0 blks 00:08:22.251 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:22.251 log =internal log bsize=4096 blocks=16384, version=2 00:08:22.251 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:22.251 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:23.184 Discarding blocks...Done. 
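Every filesystem case is followed by the same verification pass, which the trace below repeats for xfs. This is reconstructed from the target/filesystem.sh@23-@43 markers in this log; the umount retry loop implied by 'i=0' is left out as an assumption.

verify_filesystem() {
    local dev=/dev/nvme0n1p1      # GPT partition created with parted earlier in the log
    local nvmfpid=1054378         # nvmf target pid in this run
    mount "$dev" /mnt/device      # filesystem.sh@23
    touch /mnt/device/aaa         # @24: prove writes reach the NVMe/TCP namespace
    sync                          # @25
    rm /mnt/device/aaa            # @26
    sync                          # @27
    umount /mnt/device            # @30
    kill -0 "$nvmfpid"            # @37: the target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # @40: namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # @43: partition still visible
}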
00:08:23.184 15:49:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:23.184 15:49:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.719 00:08:25.719 real 0m3.097s 00:08:25.719 user 0m0.012s 00:08:25.719 sys 0m0.064s 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.719 ************************************ 00:08:25.719 END TEST filesystem_in_capsule_xfs 00:08:25.719 ************************************ 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:25.719 15:49:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1054378 ']' 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054378' 00:08:25.719 killing process with pid 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1054378 00:08:25.719 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1054378 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:26.006 00:08:26.006 real 0m11.957s 00:08:26.006 user 0m45.847s 00:08:26.006 sys 0m1.817s 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.006 ************************************ 00:08:26.006 END TEST nvmf_filesystem_in_capsule 00:08:26.006 ************************************ 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.006 rmmod nvme_tcp 00:08:26.006 rmmod nvme_fabrics 00:08:26.006 rmmod nvme_keyring 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.006 15:49:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.533 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.533 00:08:28.533 real 0m28.696s 00:08:28.533 user 1m33.298s 00:08:28.533 sys 0m5.353s 00:08:28.533 15:49:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.533 15:49:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.533 ************************************ 00:08:28.533 END TEST nvmf_filesystem 00:08:28.533 ************************************ 00:08:28.533 15:49:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.534 15:49:54 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:28.534 15:49:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.534 15:49:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.534 15:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.534 ************************************ 00:08:28.534 START TEST nvmf_target_discovery 00:08:28.534 ************************************ 00:08:28.534 15:49:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:28.534 * Looking for test storage... 
00:08:28.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.534 15:49:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.434 15:49:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.434 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:08:30.434 00:08:30.434 --- 10.0.0.2 ping statistics --- 00:08:30.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.435 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:30.435 00:08:30.435 --- 10.0.0.1 ping statistics --- 00:08:30.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.435 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1057973 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1057973 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1057973 ']' 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:30.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.435 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.435 [2024-07-15 15:49:57.310006] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:30.435 [2024-07-15 15:49:57.310097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.435 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.693 [2024-07-15 15:49:57.376196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.693 [2024-07-15 15:49:57.488295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.693 [2024-07-15 15:49:57.488351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.693 [2024-07-15 15:49:57.488380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.693 [2024-07-15 15:49:57.488391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.693 [2024-07-15 15:49:57.488401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.693 [2024-07-15 15:49:57.488533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.693 [2024-07-15 15:49:57.488600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.693 [2024-07-15 15:49:57.488664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.693 [2024-07-15 15:49:57.488667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.693 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.693 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:30.693 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.693 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.693 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 [2024-07-15 15:49:57.649813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
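The discovery target assembled here is configured entirely over JSON-RPC; the rpc_cmd calls traced at target/discovery.sh@23-@35 (the loop continues below for Null2 through Null4) correspond to roughly this sequence against SPDK's scripts/rpc.py. Flags and arguments are copied from the trace; the loop form is a reconstruction.

rpc.py nvmf_create_transport -t tcp -o -u 8192                        # discovery.sh@23, transport opts as traced
for i in $(seq 1 4); do                                               # @26
    rpc.py bdev_null_create Null$i 102400 512                         # @27: NULL_BDEV_SIZE / NULL_BLOCK_SIZE (sh@11-@12)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i                                     # @28: -a allows any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i   # @29
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420                                    # @30
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # @32: the discovery subsystem itself
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # @35: referral reported as log entry 5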
00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 Null1 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 [2024-07-15 15:49:57.690140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 Null2 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:30.952 15:49:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 Null3 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 Null4 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.952 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.953 15:49:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.953 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:08:31.211
00:08:31.211 Discovery Log Number of Records 6, Generation counter 6
00:08:31.211 =====Discovery Log Entry 0======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: current discovery subsystem
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4420
00:08:31.211 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: explicit discovery connections, duplicate discovery information
00:08:31.211 sectype: none
00:08:31.211 =====Discovery Log Entry 1======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: nvme subsystem
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4420
00:08:31.211 subnqn: nqn.2016-06.io.spdk:cnode1
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: none
00:08:31.211 sectype: none
00:08:31.211 =====Discovery Log Entry 2======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: nvme subsystem
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4420
00:08:31.211 subnqn: nqn.2016-06.io.spdk:cnode2
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: none
00:08:31.211 sectype: none
00:08:31.211 =====Discovery Log Entry 3======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: nvme subsystem
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4420
00:08:31.211 subnqn: nqn.2016-06.io.spdk:cnode3
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: none
00:08:31.211 sectype: none
00:08:31.211 =====Discovery Log Entry 4======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: nvme subsystem
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4420
00:08:31.211 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: none
00:08:31.211 sectype: none
00:08:31.211 =====Discovery Log Entry 5======
00:08:31.211 trtype: tcp
00:08:31.211 adrfam: ipv4
00:08:31.211 subtype: discovery subsystem referral
00:08:31.211 treq: not required
00:08:31.211 portid: 0
00:08:31.211 trsvcid: 4430
00:08:31.211 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:31.211 traddr: 10.0.0.2
00:08:31.211 eflags: none
00:08:31.211 sectype: none
00:08:31.211 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:31.211 Perform nvmf subsystem discovery via RPC
00:08:31.211 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:08:31.211 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:31.211 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:31.211 [
00:08:31.211 {
00:08:31.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:31.211 "subtype": "Discovery",
00:08:31.211 "listen_addresses": [
00:08:31.211 {
00:08:31.211 "trtype": "TCP",
00:08:31.211 "adrfam": "IPv4",
00:08:31.211 "traddr": "10.0.0.2",
00:08:31.211 "trsvcid": "4420"
00:08:31.211 }
00:08:31.211 ],
00:08:31.211 "allow_any_host": true,
00:08:31.211 "hosts": []
00:08:31.211 },
00:08:31.211 {
00:08:31.211 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:31.211 "subtype": "NVMe",
00:08:31.211 "listen_addresses": [
00:08:31.211 {
00:08:31.211 "trtype": "TCP",
00:08:31.211 "adrfam": "IPv4",
00:08:31.211 "traddr": "10.0.0.2",
00:08:31.211 "trsvcid": "4420"
00:08:31.211 }
00:08:31.211 ],
00:08:31.211 "allow_any_host": true,
00:08:31.211 "hosts": [],
00:08:31.211 "serial_number": "SPDK00000000000001",
00:08:31.211 "model_number": "SPDK bdev Controller",
00:08:31.211 "max_namespaces": 32,
00:08:31.211 "min_cntlid": 1,
00:08:31.211 "max_cntlid": 65519,
00:08:31.211 "namespaces": [
00:08:31.211 {
00:08:31.211 "nsid": 1,
00:08:31.211 "bdev_name": "Null1",
00:08:31.211 "name": "Null1",
00:08:31.211 "nguid": "8C594D0473CC48AFA718210DF3B9B1A3",
00:08:31.211 "uuid": "8c594d04-73cc-48af-a718-210df3b9b1a3"
00:08:31.211 }
00:08:31.211 ]
00:08:31.211 },
00:08:31.211 {
00:08:31.211 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:31.211 "subtype": "NVMe",
00:08:31.211 "listen_addresses": [
00:08:31.211 {
00:08:31.211 "trtype": "TCP",
00:08:31.211 "adrfam": "IPv4",
00:08:31.211 "traddr": "10.0.0.2",
00:08:31.211 "trsvcid": "4420"
00:08:31.211 }
00:08:31.211 ],
00:08:31.211 "allow_any_host": true,
00:08:31.211 "hosts": [],
00:08:31.211 "serial_number": "SPDK00000000000002",
00:08:31.211 "model_number": "SPDK bdev Controller",
00:08:31.211 "max_namespaces": 32,
00:08:31.211 "min_cntlid": 1,
00:08:31.211 "max_cntlid": 65519,
00:08:31.211 "namespaces": [
00:08:31.211 {
00:08:31.211 "nsid": 1,
00:08:31.211 "bdev_name": "Null2",
00:08:31.211 "name": "Null2",
00:08:31.211 "nguid": "678D3C6370E5466EBB4C0B90B9DA9005",
00:08:31.211 "uuid": "678d3c63-70e5-466e-bb4c-0b90b9da9005"
00:08:31.211 }
00:08:31.211 ]
00:08:31.211 },
00:08:31.211 {
00:08:31.211 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:31.211 "subtype": "NVMe",
00:08:31.211 "listen_addresses": [
00:08:31.211 {
00:08:31.211 "trtype": "TCP",
00:08:31.211 "adrfam": "IPv4",
00:08:31.211 "traddr": "10.0.0.2",
00:08:31.211 "trsvcid": "4420"
00:08:31.211 }
00:08:31.211 ],
00:08:31.211 "allow_any_host": true,
00:08:31.211 "hosts": [],
00:08:31.211 "serial_number": "SPDK00000000000003",
00:08:31.211 "model_number": "SPDK bdev Controller",
00:08:31.211 "max_namespaces": 32,
00:08:31.211 "min_cntlid": 1,
00:08:31.211 "max_cntlid": 65519,
00:08:31.211 "namespaces": [
00:08:31.211 {
00:08:31.211 "nsid": 1,
00:08:31.211 "bdev_name": "Null3",
00:08:31.211 "name": "Null3",
00:08:31.211 "nguid": "C55DA0A6667441018E969CC140E916C1",
00:08:31.211 "uuid": "c55da0a6-6674-4101-8e96-9cc140e916c1"
00:08:31.211 }
00:08:31.211 ]
00:08:31.211 },
00:08:31.211 {
00:08:31.211 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:31.211 "subtype": "NVMe",
00:08:31.211 "listen_addresses": [
00:08:31.211 {
00:08:31.211 "trtype": "TCP",
00:08:31.211 "adrfam": "IPv4",
00:08:31.211 "traddr": "10.0.0.2",
00:08:31.211 "trsvcid": "4420"
00:08:31.211 }
00:08:31.211 ],
00:08:31.211 "allow_any_host": true,
00:08:31.211 "hosts": [],
00:08:31.211 "serial_number": "SPDK00000000000004",
00:08:31.211 "model_number": "SPDK bdev Controller",
00:08:31.211 "max_namespaces": 32,
00:08:31.211 "min_cntlid": 1,
00:08:31.211 "max_cntlid": 65519,
00:08:31.211 "namespaces": [
00:08:31.211 {
00:08:31.211 "nsid": 1,
00:08:31.211 "bdev_name": "Null4",
00:08:31.212 "name": "Null4",
00:08:31.212 "nguid": "56E35A4A4A7247E78349E17FF9A1DEBD",
00:08:31.212 "uuid": "56e35a4a-4a72-47e7-8349-e17ff9a1debd"
00:08:31.212 }
00:08:31.212 ]
00:08:31.212 }
00:08:31.212 ]
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:31.212 15:49:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
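The flow traced above -- expose the four null-bdev subsystems on 10.0.0.2:4420, pull the discovery log from the host side, then fetch the target's own subsystem list over RPC -- can be condensed into a short stand-alone sketch. This is not the test script itself (which drives everything through the rpc_cmd helper); rpc.py, nvme-cli, and the jq filters shown are the same tools visible in this run, but the comparison logic is illustrative:

  # Sketch: cross-check host-visible discovery NQNs against the target's RPC view.
  # Assumes nvmf_tgt is already listening on 10.0.0.2:4420.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_nqns=$(nvme discover -t tcp -a 10.0.0.2 -s 4420 -o json | jq -r '.records[].subnqn' | sort -u)
  tgt_nqns=$($rpc nvmf_get_subsystems | jq -r '.[].nqn' | sort -u)
  # Both views should name the discovery subsystem plus cnode1..cnode4.
  diff <(echo "$host_nqns") <(echo "$tgt_nqns") && echo 'host and target views agree'
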
00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.212 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.212 rmmod nvme_tcp 00:08:31.212 rmmod nvme_fabrics 00:08:31.212 rmmod nvme_keyring 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1057973 ']' 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1057973 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1057973 ']' 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1057973 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1057973 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1057973' 00:08:31.470 killing process with pid 1057973 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1057973 00:08:31.470 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1057973 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.730 15:49:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.660 15:50:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.660 00:08:33.660 real 0m5.519s 00:08:33.660 user 0m4.538s 00:08:33.660 sys 0m1.855s 00:08:33.660 15:50:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.660 15:50:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 ************************************ 00:08:33.660 END TEST nvmf_target_discovery 00:08:33.660 ************************************ 00:08:33.660 15:50:00 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:33.660 15:50:00 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.660 15:50:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.660 15:50:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.660 15:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 ************************************ 00:08:33.660 START TEST nvmf_referrals 00:08:33.660 ************************************ 00:08:33.660 15:50:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.920 * Looking for test storage... 00:08:33.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
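The three loopback referral addresses defined here (with NVMF_PORT_REFERRAL=4430 set just below) drive the round-trip this test performs. Stripped of the xtrace plumbing, the referral exercise is roughly the following sketch; the RPC names and flags match the ones traced later in this log, while the loop structure is a simplification, not the script's literal code:

  # Rough shape of the referral test: register three referrals, read them back,
  # then remove them. Assumes the target's discovery listener is already up.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430    # advertise a referral
  done
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'  # expect all three IPs back
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430 # and tear them down
  done
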
00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.920 15:50:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.825 15:50:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:35.825 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:35.825 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.825 15:50:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:35.825 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:35.825 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.825 15:50:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:35.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:35.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
00:08:35.825
00:08:35.825 --- 10.0.0.2 ping statistics ---
00:08:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:35.825 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:35.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:35.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
00:08:35.825
00:08:35.825 --- 10.0.0.1 ping statistics ---
00:08:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:35.825 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:35.825 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1059958
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1059958
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1059958 ']'
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
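The "Waiting for process..." message printed next comes from the waitforlisten helper. Boiled down, the nvmfappstart step traced above launches nvmf_tgt inside the target namespace and polls its RPC socket; the loop below is a simplified stand-in for that helper (the real one caps retries at max_retries=100), with rpc_get_methods assumed as the liveness probe:

  # Simplified sketch of nvmfappstart/waitforlisten, not the helpers themselves.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers.
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"
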
00:08:36.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.083 15:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 [2024-07-15 15:50:02.803842] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:36.083 [2024-07-15 15:50:02.803937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.083 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.083 [2024-07-15 15:50:02.868747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.083 [2024-07-15 15:50:02.978120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.083 [2024-07-15 15:50:02.978213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.083 [2024-07-15 15:50:02.978233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.083 [2024-07-15 15:50:02.978250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.083 [2024-07-15 15:50:02.978263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.083 [2024-07-15 15:50:02.978405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.083 [2024-07-15 15:50:02.978477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.083 [2024-07-15 15:50:02.978528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.083 [2024-07-15 15:50:02.978535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 [2024-07-15 15:50:03.146933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 [2024-07-15 15:50:03.159193] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.341 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.342 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.598 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.855 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.112 15:50:03 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:37.112 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.112 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:37.112 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.113 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.113 15:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.370 15:50:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.370 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.628 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.886 
15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.886 rmmod nvme_tcp 00:08:37.886 rmmod nvme_fabrics 00:08:37.886 rmmod nvme_keyring 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1059958 ']' 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1059958 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1059958 ']' 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1059958 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1059958 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1059958' 00:08:37.886 killing process with pid 1059958 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1059958 00:08:37.886 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1059958 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.145 15:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.046 15:50:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.046 00:08:40.046 real 0m6.404s 00:08:40.046 user 0m8.962s 00:08:40.046 sys 0m2.075s 00:08:40.046 15:50:06 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.046 15:50:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.046 ************************************ 00:08:40.046 END TEST nvmf_referrals 00:08:40.046 ************************************ 00:08:40.305 15:50:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:40.305 15:50:06 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.305 15:50:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:40.305 15:50:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.305 15:50:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.305 ************************************ 00:08:40.305 START TEST nvmf_connect_disconnect 00:08:40.305 ************************************ 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.305 * Looking for test storage... 00:08:40.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.305 15:50:07 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.305 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.306 15:50:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.206 15:50:09 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.206 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:08:42.465 00:08:42.465 --- 10.0.0.2 ping statistics --- 00:08:42.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.465 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:42.465 00:08:42.465 --- 10.0.0.1 ping statistics --- 00:08:42.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.465 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1062254 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1062254 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1062254 ']' 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.465 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 [2024-07-15 15:50:09.266040] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
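[editor's note] For readers reconstructing the topology: the nvmf_tcp_init trace above boils down to a handful of iproute2 commands that move one port of the NIC pair into a private network namespace, so target and initiator can speak NVMe/TCP over real hardware on a single host. A condensed, non-authoritative replay of the traced commands (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are as logged in this run and will differ on other rigs):

  # Condensed replay of nvmf_tcp_init as traced above (nvmf/common.sh@229-268).
  ip netns add cvl_0_0_ns_spdk                      # private ns for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back

The two ping checks are exactly the "--- 10.0.0.x ping statistics ---" blocks seen in the log; once both directions answer, nvmf_tcp_init returns 0 and the test proper can start.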
00:08:42.465 [2024-07-15 15:50:09.266130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.465 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.465 [2024-07-15 15:50:09.331272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.723 [2024-07-15 15:50:09.442757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.723 [2024-07-15 15:50:09.442816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.723 [2024-07-15 15:50:09.442837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.723 [2024-07-15 15:50:09.442854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.723 [2024-07-15 15:50:09.442868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.723 [2024-07-15 15:50:09.442988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.723 [2024-07-15 15:50:09.443074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.723 [2024-07-15 15:50:09.443141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.723 [2024-07-15 15:50:09.443148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 [2024-07-15 15:50:09.600759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.723 15:50:09 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.723 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.723 [2024-07-15 15:50:09.652632] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.980 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.980 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:42.980 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:42.980 15:50:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.154 rmmod nvme_tcp 00:08:57.154 rmmod nvme_fabrics 00:08:57.154 rmmod nvme_keyring 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1062254 ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1062254 ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1062254' 00:08:57.154 killing process with pid 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1062254 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.154 15:50:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.061 15:50:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.061 00:08:59.061 real 0m18.861s 00:08:59.061 user 0m56.804s 00:08:59.061 sys 0m3.265s 00:08:59.061 15:50:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.061 15:50:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:59.061 ************************************ 00:08:59.061 END TEST nvmf_connect_disconnect 00:08:59.061 ************************************ 00:08:59.061 15:50:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:59.061 15:50:25 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:59.061 15:50:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.061 15:50:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.061 15:50:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.061 ************************************ 00:08:59.061 START TEST nvmf_multitarget 00:08:59.061 ************************************ 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:59.061 * Looking for test storage... 
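[editor's note] Before the multitarget trace begins: the nvmf_connect_disconnect run that just ended drove a short provisioning sequence, traced at 15:50:09 above. Restated as a hedged sketch (rpc_cmd is the harness's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, judging by the socket the waitforlisten message names; the bdev name capture is an assumption for readability):

  # Provisioning RPCs as traced for nvmf_connect_disconnect (target/connect_disconnect.sh@18-24).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0           # TCP transport init
  bdev=$(rpc_cmd bdev_malloc_create 64 512)                      # 64 MiB / 512 B blocks -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With num_iterations=5, the test then connects and disconnects the initiator five times, which is why five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines appear before the teardown.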
00:08:59.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.061 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
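[editor's note] The "xtrace_disable_per_cmd _remove_spdk_ns" call traced immediately above expands, on the very next line, to eval '_remove_spdk_ns 14> /dev/null'. A sketch of what that idiom appears to do, under one stated assumption (not verified from this log): the harness points BASH_XTRACEFD at fd 14, so redirecting fd 14 to /dev/null mutes set -x output for that one command without toggling set +x globally.

  # Assumed harness setup: BASH_XTRACEFD=14
  xtrace_disable_per_cmd() {
      eval "$* 14> /dev/null"    # trace fd discarded for this command only
  }
  xtrace_disable_per_cmd _remove_spdk_ns   # tear the namespace down without trace noise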
00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.062 15:50:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:01.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:01.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:01.600 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:01.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:01.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:09:01.601 00:09:01.601 --- 10.0.0.2 ping statistics --- 00:09:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.601 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:09:01.601 00:09:01.601 --- 10.0.0.1 ping statistics --- 00:09:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.601 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1066020 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1066020 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1066020 ']' 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.601 15:50:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 [2024-07-15 15:50:28.248649] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
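[editor's note] The launch pattern recurs for every test in this log: nvmfappstart runs the target binary inside the namespace created earlier, then blocks on its RPC socket. A sketch of the step just traced (nvmf/common.sh@480-482), not the verbatim harness code:

  # nvmfappstart as traced: run nvmf_tgt in the target namespace, wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                  # 1066020 in this run
  waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock, hence the "Waiting for process
                              # to start up and listen..." message in the log

Here -m 0xF pins four reactors, matching the four "Reactor started on core 0..3" notices that follow, and -e 0xFFFF is the "Tracepoint Group Mask 0xFFFF" announced in the app_setup_trace lines.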
00:09:01.601 [2024-07-15 15:50:28.248730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.601 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.601 [2024-07-15 15:50:28.314770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.601 [2024-07-15 15:50:28.421473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.602 [2024-07-15 15:50:28.421522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.602 [2024-07-15 15:50:28.421543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.602 [2024-07-15 15:50:28.421559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.602 [2024-07-15 15:50:28.421573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.602 [2024-07-15 15:50:28.421662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.602 [2024-07-15 15:50:28.421692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.602 [2024-07-15 15:50:28.421752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.602 [2024-07-15 15:50:28.421759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:02.534 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:02.792 "nvmf_tgt_1" 00:09:02.792 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:02.792 "nvmf_tgt_2" 00:09:02.792 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:02.792 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:02.792 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:02.792 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:03.049 true 00:09:03.049 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:03.049 true 00:09:03.049 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:03.049 15:50:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.308 rmmod nvme_tcp 00:09:03.308 rmmod nvme_fabrics 00:09:03.308 rmmod nvme_keyring 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1066020 ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1066020 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1066020 ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1066020 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1066020 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1066020' 00:09:03.308 killing process with pid 1066020 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1066020 00:09:03.308 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1066020 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.567 15:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.102 15:50:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.102 00:09:06.102 real 0m6.530s 00:09:06.102 user 0m9.423s 00:09:06.102 sys 0m2.039s 00:09:06.102 15:50:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.102 15:50:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.102 ************************************ 00:09:06.102 END TEST nvmf_multitarget 00:09:06.102 ************************************ 00:09:06.102 15:50:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:06.102 15:50:32 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:06.102 15:50:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.102 15:50:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.102 15:50:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.102 ************************************ 00:09:06.102 START TEST nvmf_rpc 00:09:06.102 ************************************ 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:06.102 * Looking for test storage... 
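For readers skimming the transcript: the nvmf_multitarget run that just passed reduces to the RPC sequence below. This is a condensed sketch, not the test script itself; it assumes a running nvmf_tgt reachable over the default RPC socket and uses the same helper path invoked in the trace.

    # assert exactly one (default) target exists
    test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length   # expect 1
    # create two extra named targets (-n name, -s 32, as in the trace above)
    test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length   # expect 3
    # tear them down and confirm only the default target remains
    test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length   # expect 1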
00:09:06.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.102 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.103 15:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
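Before the NIC probing below, common.sh has already derived the initiator identity seen in the trace above. A minimal sketch of that derivation (variable names as traced; the suffix slice on the NQN is an assumption about how the script obtains the host ID, shown because the traced values are consistent with it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: the host ID is the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")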
00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.001 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.001 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:09:08.001 00:09:08.001 --- 10.0.0.2 ping statistics --- 00:09:08.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.001 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:09:08.001 00:09:08.001 --- 10.0.0.1 ping statistics --- 00:09:08.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.001 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:08.001 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1068137 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1068137 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1068137 ']' 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.002 15:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.002 [2024-07-15 15:50:34.787668] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:08.002 [2024-07-15 15:50:34.787765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.002 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.002 [2024-07-15 15:50:34.858827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.259 [2024-07-15 15:50:34.980403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.259 [2024-07-15 15:50:34.980492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.259 [2024-07-15 15:50:34.980528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.259 [2024-07-15 15:50:34.980548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.259 [2024-07-15 15:50:34.980566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.259 [2024-07-15 15:50:34.983910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.259 [2024-07-15 15:50:34.983959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.259 [2024-07-15 15:50:34.984024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.259 [2024-07-15 15:50:34.984028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:08.259 "tick_rate": 2700000000, 00:09:08.259 "poll_groups": [ 00:09:08.259 { 00:09:08.259 "name": "nvmf_tgt_poll_group_000", 00:09:08.259 "admin_qpairs": 0, 00:09:08.259 "io_qpairs": 0, 00:09:08.259 "current_admin_qpairs": 0, 00:09:08.259 "current_io_qpairs": 0, 00:09:08.259 "pending_bdev_io": 0, 00:09:08.259 "completed_nvme_io": 0, 00:09:08.259 "transports": [] 00:09:08.259 }, 00:09:08.259 { 00:09:08.259 "name": "nvmf_tgt_poll_group_001", 00:09:08.259 "admin_qpairs": 0, 00:09:08.259 "io_qpairs": 0, 00:09:08.259 "current_admin_qpairs": 0, 00:09:08.259 "current_io_qpairs": 0, 00:09:08.259 "pending_bdev_io": 0, 00:09:08.259 "completed_nvme_io": 0, 00:09:08.259 "transports": [] 00:09:08.259 }, 00:09:08.259 { 00:09:08.259 "name": "nvmf_tgt_poll_group_002", 00:09:08.259 "admin_qpairs": 0, 00:09:08.259 "io_qpairs": 0, 00:09:08.259 "current_admin_qpairs": 0, 00:09:08.259 "current_io_qpairs": 0, 00:09:08.259 "pending_bdev_io": 0, 00:09:08.259 "completed_nvme_io": 0, 00:09:08.259 "transports": [] 00:09:08.259 }, 00:09:08.259 { 00:09:08.259 "name": "nvmf_tgt_poll_group_003", 00:09:08.259 "admin_qpairs": 0, 00:09:08.259 "io_qpairs": 0, 00:09:08.259 "current_admin_qpairs": 0, 00:09:08.259 "current_io_qpairs": 0, 00:09:08.259 "pending_bdev_io": 0, 00:09:08.259 "completed_nvme_io": 0, 00:09:08.259 "transports": [] 00:09:08.259 } 00:09:08.259 ] 00:09:08.259 }' 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:08.259 15:50:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.515 [2024-07-15 15:50:35.243244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:08.515 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:08.516 "tick_rate": 2700000000, 00:09:08.516 "poll_groups": [ 00:09:08.516 { 00:09:08.516 "name": "nvmf_tgt_poll_group_000", 00:09:08.516 "admin_qpairs": 0, 00:09:08.516 "io_qpairs": 0, 00:09:08.516 "current_admin_qpairs": 0, 00:09:08.516 "current_io_qpairs": 0, 00:09:08.516 "pending_bdev_io": 0, 00:09:08.516 "completed_nvme_io": 0, 00:09:08.516 "transports": [ 00:09:08.516 { 00:09:08.516 "trtype": "TCP" 00:09:08.516 } 00:09:08.516 ] 00:09:08.516 }, 00:09:08.516 { 00:09:08.516 "name": "nvmf_tgt_poll_group_001", 00:09:08.516 "admin_qpairs": 0, 00:09:08.516 "io_qpairs": 0, 00:09:08.516 "current_admin_qpairs": 0, 00:09:08.516 "current_io_qpairs": 0, 00:09:08.516 "pending_bdev_io": 0, 00:09:08.516 "completed_nvme_io": 0, 00:09:08.516 "transports": [ 00:09:08.516 { 00:09:08.516 "trtype": "TCP" 00:09:08.516 } 00:09:08.516 ] 00:09:08.516 }, 00:09:08.516 { 00:09:08.516 "name": "nvmf_tgt_poll_group_002", 00:09:08.516 "admin_qpairs": 0, 00:09:08.516 "io_qpairs": 0, 00:09:08.516 "current_admin_qpairs": 0, 00:09:08.516 "current_io_qpairs": 0, 00:09:08.516 "pending_bdev_io": 0, 00:09:08.516 "completed_nvme_io": 0, 00:09:08.516 "transports": [ 00:09:08.516 { 00:09:08.516 "trtype": "TCP" 00:09:08.516 } 00:09:08.516 ] 00:09:08.516 }, 00:09:08.516 { 00:09:08.516 "name": "nvmf_tgt_poll_group_003", 00:09:08.516 "admin_qpairs": 0, 00:09:08.516 "io_qpairs": 0, 00:09:08.516 "current_admin_qpairs": 0, 00:09:08.516 "current_io_qpairs": 0, 00:09:08.516 "pending_bdev_io": 0, 00:09:08.516 "completed_nvme_io": 0, 00:09:08.516 "transports": [ 00:09:08.516 { 00:09:08.516 "trtype": "TCP" 00:09:08.516 } 00:09:08.516 ] 00:09:08.516 } 00:09:08.516 ] 00:09:08.516 }' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
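The jcount/jsum helpers being traced here are what rpc.sh uses to assert on the nvmf_get_stats JSON captured above: jcount counts the values a jq filter emits, jsum totals them. Reconstructed from the xtrace, they amount to the sketch below (how $stats reaches jq is inferred, not shown verbatim in the trace):

    jcount() {
        local filter=$1
        # count matching JSON nodes, e.g. jcount '.poll_groups[].name' -> 4
        jq "$filter" <<< "$stats" | wc -l
    }
    jsum() {
        local filter=$1
        # sum numeric fields, e.g. jsum '.poll_groups[].admin_qpairs' -> 0
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }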
00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 Malloc1 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.516 [2024-07-15 15:50:35.404753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:08.516 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:09:08.516 [2024-07-15 15:50:35.427321] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:09:08.772 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:08.772 could not add new controller: failed to write to nvme-fabrics device 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.772 15:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.337 15:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.337 15:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.337 15:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.337 15:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.337 15:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.237 15:50:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:11.237 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.495 [2024-07-15 15:50:38.229086] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:09:11.495 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:11.495 could not add new controller: failed to write to nvme-fabrics device 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.495 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.098 15:50:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.098 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.098 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.098 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.098 15:50:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:14.626 15:50:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:14.626 15:50:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.626 [2024-07-15 15:50:41.056041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.626 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.883 15:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.883 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:14.883 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.883 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:14.883 15:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:16.780 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:16.780 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:16.780 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.781 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:16.781 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.781 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:16.781 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 [2024-07-15 15:50:43.845506] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.038 15:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.968 15:50:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.968 15:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:17.968 15:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.968 15:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.968 15:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 [2024-07-15 15:50:46.648549] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.880 15:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.446 15:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.446 15:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.446 15:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.446 15:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.446 15:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.967 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 [2024-07-15 15:50:49.425010] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.968 15:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.225 15:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.225 15:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.225 15:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.225 15:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.225 15:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.751 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.751 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.751 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.751 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.752 
15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 [2024-07-15 15:50:52.179368] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.752 15:50:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.752 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.032 15:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.032 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.032 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.032 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.032 15:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.924 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 [2024-07-15 15:50:54.963084] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.182 15:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.182 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 [2024-07-15 15:50:55.011131] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 [2024-07-15 15:50:55.059328] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
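The connect-then-poll pattern traced repeatedly above is waitforserial (autotest_common.sh@1198-1208 in the trace): after nvme connect it retries lsblk until a block device with the expected serial appears. A minimal sketch reconstructed from the traced statements, assuming the helper takes the serial and an optional expected device count:

  waitforserial() {
      local i=0
      local nvme_device_counter=1 nvme_devices=0
      [[ -n $2 ]] && nvme_device_counter=$2   # optional expected count
      while ((i++ <= 15)); do
          sleep 2
          # count block devices whose SERIAL column matches ($1 is SPDKISFASTANDAWESOME here)
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
          ((nvme_devices == nvme_device_counter)) && return 0
      done
      return 1
  }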
00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 [2024-07-15 15:50:55.107472] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.183 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.440 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
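A few entries below, the harness captures nvmf_get_stats and checks the qpair totals with jsum (target/rpc.sh@19-20 in the trace), which is essentially jq piped into awk. A sketch, assuming the captured JSON sits in $stats:

  jsum() {
      local filter=$1
      # sum one numeric field across all poll groups of the captured stats JSON
      jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
  }

Against the stats dumped below, jsum '.poll_groups[].admin_qpairs' gives 2+2+1+2 = 7 and jsum '.poll_groups[].io_qpairs' gives 4*84 = 336, matching the (( 7 > 0 )) and (( 336 > 0 )) assertions in the trace.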
00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 [2024-07-15 15:50:55.155638] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:28.441 "tick_rate": 2700000000, 00:09:28.441 "poll_groups": [ 00:09:28.441 { 00:09:28.441 "name": "nvmf_tgt_poll_group_000", 00:09:28.441 "admin_qpairs": 2, 00:09:28.441 "io_qpairs": 84, 00:09:28.441 "current_admin_qpairs": 0, 00:09:28.441 "current_io_qpairs": 0, 00:09:28.441 "pending_bdev_io": 0, 00:09:28.441 "completed_nvme_io": 85, 00:09:28.441 "transports": [ 00:09:28.441 { 00:09:28.441 "trtype": "TCP" 00:09:28.441 } 00:09:28.441 ] 00:09:28.441 }, 00:09:28.441 { 00:09:28.441 "name": "nvmf_tgt_poll_group_001", 00:09:28.441 "admin_qpairs": 2, 00:09:28.441 "io_qpairs": 84, 00:09:28.441 "current_admin_qpairs": 0, 00:09:28.441 "current_io_qpairs": 0, 00:09:28.441 "pending_bdev_io": 0, 00:09:28.441 "completed_nvme_io": 331, 00:09:28.441 "transports": [ 00:09:28.441 { 00:09:28.441 "trtype": "TCP" 00:09:28.441 } 00:09:28.441 ] 00:09:28.441 }, 00:09:28.441 { 00:09:28.441 
"name": "nvmf_tgt_poll_group_002", 00:09:28.441 "admin_qpairs": 1, 00:09:28.441 "io_qpairs": 84, 00:09:28.441 "current_admin_qpairs": 0, 00:09:28.441 "current_io_qpairs": 0, 00:09:28.441 "pending_bdev_io": 0, 00:09:28.441 "completed_nvme_io": 136, 00:09:28.441 "transports": [ 00:09:28.441 { 00:09:28.441 "trtype": "TCP" 00:09:28.441 } 00:09:28.441 ] 00:09:28.441 }, 00:09:28.441 { 00:09:28.441 "name": "nvmf_tgt_poll_group_003", 00:09:28.441 "admin_qpairs": 2, 00:09:28.441 "io_qpairs": 84, 00:09:28.441 "current_admin_qpairs": 0, 00:09:28.441 "current_io_qpairs": 0, 00:09:28.441 "pending_bdev_io": 0, 00:09:28.441 "completed_nvme_io": 134, 00:09:28.441 "transports": [ 00:09:28.441 { 00:09:28.441 "trtype": "TCP" 00:09:28.441 } 00:09:28.441 ] 00:09:28.441 } 00:09:28.441 ] 00:09:28.441 }' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.441 rmmod nvme_tcp 00:09:28.441 rmmod nvme_fabrics 00:09:28.441 rmmod nvme_keyring 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1068137 ']' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1068137 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1068137 ']' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1068137 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.441 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1068137 00:09:28.699 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:09:28.699 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.699 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1068137' 00:09:28.699 killing process with pid 1068137 00:09:28.699 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1068137 00:09:28.699 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1068137 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.958 15:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.869 15:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.869 00:09:30.869 real 0m25.231s 00:09:30.869 user 1m21.687s 00:09:30.869 sys 0m4.225s 00:09:30.869 15:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.869 15:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.869 ************************************ 00:09:30.869 END TEST nvmf_rpc 00:09:30.869 ************************************ 00:09:30.869 15:50:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.869 15:50:57 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:30.869 15:50:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.869 15:50:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.869 15:50:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.869 ************************************ 00:09:30.869 START TEST nvmf_invalid 00:09:30.869 ************************************ 00:09:30.869 15:50:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:31.131 * Looking for test storage... 
00:09:31.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.131 15:50:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:33.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:33.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.057 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:33.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:33.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:09:33.058 00:09:33.058 --- 10.0.0.2 ping statistics --- 00:09:33.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.058 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:09:33.058 00:09:33.058 --- 10.0.0.1 ping statistics --- 00:09:33.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.058 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1072625 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1072625 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1072625 ']' 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.058 15:50:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.316 [2024-07-15 15:51:00.012036] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
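For reference, the nvmf_tcp_init plumbing traced above boils down to moving one ice port into a fresh network namespace for the target while the initiator keeps the other; condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk                  # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Both directions then ping with 0% loss before nvmf_tgt is launched inside the namespace via ip netns exec.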
00:09:33.316 [2024-07-15 15:51:00.012127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.316 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.316 [2024-07-15 15:51:00.088142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.316 [2024-07-15 15:51:00.208984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.316 [2024-07-15 15:51:00.209051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.316 [2024-07-15 15:51:00.209076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.316 [2024-07-15 15:51:00.209087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.316 [2024-07-15 15:51:00.209097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.316 [2024-07-15 15:51:00.209148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.316 [2024-07-15 15:51:00.210898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.316 [2024-07-15 15:51:00.213997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.316 [2024-07-15 15:51:00.214004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:33.574 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28048 00:09:33.831 [2024-07-15 15:51:00.611591] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:33.831 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:33.831 { 00:09:33.831 "nqn": "nqn.2016-06.io.spdk:cnode28048", 00:09:33.831 "tgt_name": "foobar", 00:09:33.831 "method": "nvmf_create_subsystem", 00:09:33.831 "req_id": 1 00:09:33.831 } 00:09:33.831 Got JSON-RPC error response 00:09:33.831 response: 00:09:33.831 { 00:09:33.831 "code": -32603, 00:09:33.832 "message": "Unable to find target foobar" 00:09:33.832 }' 00:09:33.832 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:33.832 { 00:09:33.832 "nqn": "nqn.2016-06.io.spdk:cnode28048", 00:09:33.832 "tgt_name": "foobar", 00:09:33.832 "method": "nvmf_create_subsystem", 00:09:33.832 "req_id": 1 00:09:33.832 } 00:09:33.832 Got JSON-RPC error response 00:09:33.832 response: 00:09:33.832 { 00:09:33.832 "code": -32603, 00:09:33.832 "message": "Unable to find target foobar" 
00:09:33.832 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:33.832 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:33.832 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5125 00:09:34.089 [2024-07-15 15:51:00.872478] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5125: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:34.089 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:34.089 { 00:09:34.089 "nqn": "nqn.2016-06.io.spdk:cnode5125", 00:09:34.089 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:34.089 "method": "nvmf_create_subsystem", 00:09:34.089 "req_id": 1 00:09:34.089 } 00:09:34.089 Got JSON-RPC error response 00:09:34.089 response: 00:09:34.089 { 00:09:34.089 "code": -32602, 00:09:34.089 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:34.089 }' 00:09:34.089 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:34.089 { 00:09:34.089 "nqn": "nqn.2016-06.io.spdk:cnode5125", 00:09:34.089 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:34.089 "method": "nvmf_create_subsystem", 00:09:34.089 "req_id": 1 00:09:34.089 } 00:09:34.089 Got JSON-RPC error response 00:09:34.089 response: 00:09:34.089 { 00:09:34.089 "code": -32602, 00:09:34.089 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:34.089 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:34.089 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:34.089 15:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31187 00:09:34.347 [2024-07-15 15:51:01.137326] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31187: invalid model number 'SPDK_Controller' 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:34.347 { 00:09:34.347 "nqn": "nqn.2016-06.io.spdk:cnode31187", 00:09:34.347 "model_number": "SPDK_Controller\u001f", 00:09:34.347 "method": "nvmf_create_subsystem", 00:09:34.347 "req_id": 1 00:09:34.347 } 00:09:34.347 Got JSON-RPC error response 00:09:34.347 response: 00:09:34.347 { 00:09:34.347 "code": -32602, 00:09:34.347 "message": "Invalid MN SPDK_Controller\u001f" 00:09:34.347 }' 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:34.347 { 00:09:34.347 "nqn": "nqn.2016-06.io.spdk:cnode31187", 00:09:34.347 "model_number": "SPDK_Controller\u001f", 00:09:34.347 "method": "nvmf_create_subsystem", 00:09:34.347 "req_id": 1 00:09:34.347 } 00:09:34.347 Got JSON-RPC error response 00:09:34.347 response: 00:09:34.347 { 00:09:34.347 "code": -32602, 00:09:34.347 "message": "Invalid MN SPDK_Controller\u001f" 00:09:34.347 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:34.347 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 
15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 
15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a7vqB}4>%lZCOuv]VzH63' 00:09:34.348 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'a7vqB}4>%lZCOuv]VzH63' nqn.2016-06.io.spdk:cnode24731 00:09:34.607 [2024-07-15 15:51:01.454429] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24731: invalid serial number 'a7vqB}4>%lZCOuv]VzH63' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:34.607 { 00:09:34.607 "nqn": "nqn.2016-06.io.spdk:cnode24731", 00:09:34.607 "serial_number": "a7vqB}4>%lZCOuv]VzH63", 00:09:34.607 "method": "nvmf_create_subsystem", 00:09:34.607 "req_id": 1 00:09:34.607 } 00:09:34.607 Got JSON-RPC error response 00:09:34.607 response: 00:09:34.607 { 
00:09:34.607 "code": -32602, 00:09:34.607 "message": "Invalid SN a7vqB}4>%lZCOuv]VzH63" 00:09:34.607 }' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:34.607 { 00:09:34.607 "nqn": "nqn.2016-06.io.spdk:cnode24731", 00:09:34.607 "serial_number": "a7vqB}4>%lZCOuv]VzH63", 00:09:34.607 "method": "nvmf_create_subsystem", 00:09:34.607 "req_id": 1 00:09:34.607 } 00:09:34.607 Got JSON-RPC error response 00:09:34.607 response: 00:09:34.607 { 00:09:34.607 "code": -32602, 00:09:34.607 "message": "Invalid SN a7vqB}4>%lZCOuv]VzH63" 00:09:34.607 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 
00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 
00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:34.607 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 
00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:34.608 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.865 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
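(Aside: the xtrace output above and below is target/invalid.sh's gen_random_s helper assembling a random string one character at a time — printf %x turns a decimal code from the chars=() table into hex, echo -e emits that byte, and string+= appends it — before handing the result to nvmf_create_subsystem and asserting on the error. A condensed, illustrative Python sketch of the same pattern follows; the rpc.py path is taken from this log, while the cnode1 NQN and the stdout/stderr handling are assumptions of the sketch, not the harness's bash implementation:)

    # Illustrative condensation of the pattern this suite repeats in bash:
    # build a random string from the printable-ASCII table (codes 32-127,
    # the same chars=() array visible in the trace), pass it to
    # nvmf_create_subsystem as a serial number one byte longer than the
    # 20-byte SN field allows, and assert on the expected error text.
    import random
    import subprocess

    RPC = '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'

    def gen_random_s(length):
        return ''.join(chr(random.randrange(32, 128)) for _ in range(length))

    sn = gen_random_s(21)
    while sn.startswith('-'):  # mirrors the harness's [[ $string == \- ]] guard
        sn = gen_random_s(21)

    proc = subprocess.run([RPC, 'nvmf_create_subsystem', '-s', sn,
                           'nqn.2016-06.io.spdk:cnode1'],  # NQN is an arbitrary example
                          capture_output=True, text=True)
    assert 'Invalid SN' in (proc.stdout + proc.stderr)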
00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '9$T3SB'\''*"#fz=uO|73="hQ~Skx~V7c_\R$g+9t@*T' 00:09:34.866 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9$T3SB'\''*"#fz=uO|73="hQ~Skx~V7c_\R$g+9t@*T' nqn.2016-06.io.spdk:cnode18487 00:09:35.123 [2024-07-15 15:51:01.823622] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18487: invalid model number '9$T3SB'*"#fz=uO|73="hQ~Skx~V7c_\R$g+9t@*T' 00:09:35.124 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:35.124 { 00:09:35.124 "nqn": "nqn.2016-06.io.spdk:cnode18487", 00:09:35.124 "model_number": "9$T3SB'\''*\"#fz=uO|73=\"hQ~Skx~V7c_\\R$g+9t@*T", 00:09:35.124 "method": "nvmf_create_subsystem", 00:09:35.124 "req_id": 1 00:09:35.124 } 00:09:35.124 Got JSON-RPC error response 00:09:35.124 response: 00:09:35.124 { 00:09:35.124 "code": -32602, 00:09:35.124 "message": "Invalid MN 9$T3SB'\''*\"#fz=uO|73=\"hQ~Skx~V7c_\\R$g+9t@*T" 00:09:35.124 }' 00:09:35.124 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:35.124 { 00:09:35.124 "nqn": "nqn.2016-06.io.spdk:cnode18487", 00:09:35.124 "model_number": "9$T3SB'*\"#fz=uO|73=\"hQ~Skx~V7c_\\R$g+9t@*T", 00:09:35.124 "method": "nvmf_create_subsystem", 00:09:35.124 "req_id": 1 00:09:35.124 } 00:09:35.124 Got JSON-RPC error response 00:09:35.124 response: 00:09:35.124 { 00:09:35.124 "code": -32602, 00:09:35.124 "message": "Invalid MN 9$T3SB'*\"#fz=uO|73=\"hQ~Skx~V7c_\\R$g+9t@*T" 00:09:35.124 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.124 15:51:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:35.382 [2024-07-15 15:51:02.076558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.382 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:35.639 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:35.639 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:35.639 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:35.639 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:35.639 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:35.896 [2024-07-15 15:51:02.598284] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:35.896 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:35.896 { 00:09:35.896 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:35.896 "listen_address": { 00:09:35.896 "trtype": "tcp", 00:09:35.896 "traddr": "", 00:09:35.896 "trsvcid": "4421" 00:09:35.896 }, 00:09:35.896 "method": "nvmf_subsystem_remove_listener", 00:09:35.896 "req_id": 1 00:09:35.896 } 00:09:35.896 Got JSON-RPC error response 00:09:35.896 response: 00:09:35.896 { 00:09:35.896 "code": -32602, 00:09:35.896 "message": "Invalid parameters" 00:09:35.896 }' 00:09:35.896 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:35.896 { 00:09:35.896 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:35.896 "listen_address": { 00:09:35.896 "trtype": "tcp", 00:09:35.896 "traddr": "", 00:09:35.896 "trsvcid": "4421" 00:09:35.896 }, 00:09:35.896 "method": "nvmf_subsystem_remove_listener", 00:09:35.896 "req_id": 1 00:09:35.896 } 00:09:35.896 Got JSON-RPC error response 00:09:35.896 response: 00:09:35.896 { 00:09:35.896 "code": -32602, 00:09:35.896 "message": "Invalid parameters" 00:09:35.896 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:35.896 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5345 -i 0 00:09:36.153 [2024-07-15 15:51:02.855059] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5345: invalid cntlid range [0-65519] 00:09:36.153 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:36.153 { 00:09:36.153 "nqn": "nqn.2016-06.io.spdk:cnode5345", 00:09:36.153 "min_cntlid": 0, 00:09:36.153 "method": "nvmf_create_subsystem", 00:09:36.153 "req_id": 1 00:09:36.153 } 00:09:36.153 Got JSON-RPC error response 00:09:36.153 response: 00:09:36.153 { 00:09:36.153 "code": -32602, 00:09:36.153 "message": "Invalid cntlid range [0-65519]" 00:09:36.153 }' 00:09:36.153 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:36.153 { 00:09:36.153 "nqn": "nqn.2016-06.io.spdk:cnode5345", 00:09:36.153 "min_cntlid": 0, 00:09:36.153 "method": "nvmf_create_subsystem", 00:09:36.153 "req_id": 1 00:09:36.153 } 00:09:36.153 Got JSON-RPC error response 00:09:36.153 response: 00:09:36.153 { 00:09:36.153 "code": -32602, 00:09:36.153 "message": "Invalid cntlid range [0-65519]" 00:09:36.153 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* 
]] 00:09:36.153 15:51:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11518 -i 65520 00:09:36.411 [2024-07-15 15:51:03.107867] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11518: invalid cntlid range [65520-65519] 00:09:36.411 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:36.411 { 00:09:36.411 "nqn": "nqn.2016-06.io.spdk:cnode11518", 00:09:36.411 "min_cntlid": 65520, 00:09:36.411 "method": "nvmf_create_subsystem", 00:09:36.411 "req_id": 1 00:09:36.411 } 00:09:36.411 Got JSON-RPC error response 00:09:36.411 response: 00:09:36.411 { 00:09:36.411 "code": -32602, 00:09:36.411 "message": "Invalid cntlid range [65520-65519]" 00:09:36.411 }' 00:09:36.411 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:36.411 { 00:09:36.411 "nqn": "nqn.2016-06.io.spdk:cnode11518", 00:09:36.411 "min_cntlid": 65520, 00:09:36.411 "method": "nvmf_create_subsystem", 00:09:36.411 "req_id": 1 00:09:36.411 } 00:09:36.411 Got JSON-RPC error response 00:09:36.411 response: 00:09:36.411 { 00:09:36.411 "code": -32602, 00:09:36.411 "message": "Invalid cntlid range [65520-65519]" 00:09:36.411 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.411 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21996 -I 0 00:09:36.667 [2024-07-15 15:51:03.356693] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21996: invalid cntlid range [1-0] 00:09:36.667 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:36.667 { 00:09:36.667 "nqn": "nqn.2016-06.io.spdk:cnode21996", 00:09:36.667 "max_cntlid": 0, 00:09:36.667 "method": "nvmf_create_subsystem", 00:09:36.667 "req_id": 1 00:09:36.667 } 00:09:36.667 Got JSON-RPC error response 00:09:36.667 response: 00:09:36.667 { 00:09:36.667 "code": -32602, 00:09:36.667 "message": "Invalid cntlid range [1-0]" 00:09:36.667 }' 00:09:36.667 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:36.667 { 00:09:36.667 "nqn": "nqn.2016-06.io.spdk:cnode21996", 00:09:36.667 "max_cntlid": 0, 00:09:36.667 "method": "nvmf_create_subsystem", 00:09:36.667 "req_id": 1 00:09:36.667 } 00:09:36.667 Got JSON-RPC error response 00:09:36.667 response: 00:09:36.667 { 00:09:36.667 "code": -32602, 00:09:36.667 "message": "Invalid cntlid range [1-0]" 00:09:36.667 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.667 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30423 -I 65520 00:09:36.923 [2024-07-15 15:51:03.621585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30423: invalid cntlid range [1-65520] 00:09:36.923 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:36.923 { 00:09:36.923 "nqn": "nqn.2016-06.io.spdk:cnode30423", 00:09:36.923 "max_cntlid": 65520, 00:09:36.923 "method": "nvmf_create_subsystem", 00:09:36.923 "req_id": 1 00:09:36.923 } 00:09:36.923 Got JSON-RPC error response 00:09:36.923 response: 00:09:36.923 { 00:09:36.923 "code": -32602, 00:09:36.923 "message": "Invalid cntlid range [1-65520]" 00:09:36.923 }' 00:09:36.923 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:09:36.923 { 00:09:36.923 "nqn": "nqn.2016-06.io.spdk:cnode30423", 00:09:36.923 "max_cntlid": 65520, 00:09:36.923 "method": "nvmf_create_subsystem", 00:09:36.923 "req_id": 1 00:09:36.923 } 00:09:36.923 Got JSON-RPC error response 00:09:36.923 response: 00:09:36.923 { 00:09:36.923 "code": -32602, 00:09:36.923 "message": "Invalid cntlid range [1-65520]" 00:09:36.923 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.923 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11751 -i 6 -I 5 00:09:37.182 [2024-07-15 15:51:03.874427] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11751: invalid cntlid range [6-5] 00:09:37.182 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:37.182 { 00:09:37.182 "nqn": "nqn.2016-06.io.spdk:cnode11751", 00:09:37.182 "min_cntlid": 6, 00:09:37.182 "max_cntlid": 5, 00:09:37.182 "method": "nvmf_create_subsystem", 00:09:37.182 "req_id": 1 00:09:37.182 } 00:09:37.182 Got JSON-RPC error response 00:09:37.182 response: 00:09:37.182 { 00:09:37.182 "code": -32602, 00:09:37.182 "message": "Invalid cntlid range [6-5]" 00:09:37.182 }' 00:09:37.182 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:37.182 { 00:09:37.182 "nqn": "nqn.2016-06.io.spdk:cnode11751", 00:09:37.182 "min_cntlid": 6, 00:09:37.182 "max_cntlid": 5, 00:09:37.182 "method": "nvmf_create_subsystem", 00:09:37.182 "req_id": 1 00:09:37.182 } 00:09:37.182 Got JSON-RPC error response 00:09:37.182 response: 00:09:37.182 { 00:09:37.182 "code": -32602, 00:09:37.182 "message": "Invalid cntlid range [6-5]" 00:09:37.182 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.182 15:51:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:37.182 { 00:09:37.182 "name": "foobar", 00:09:37.182 "method": "nvmf_delete_target", 00:09:37.182 "req_id": 1 00:09:37.182 } 00:09:37.182 Got JSON-RPC error response 00:09:37.182 response: 00:09:37.182 { 00:09:37.182 "code": -32602, 00:09:37.182 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:37.182 }' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:37.182 { 00:09:37.182 "name": "foobar", 00:09:37.182 "method": "nvmf_delete_target", 00:09:37.182 "req_id": 1 00:09:37.182 } 00:09:37.182 Got JSON-RPC error response 00:09:37.182 response: 00:09:37.182 { 00:09:37.182 "code": -32602, 00:09:37.182 "message": "The specified target doesn't exist, cannot delete it." 
00:09:37.182 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.182 rmmod nvme_tcp 00:09:37.182 rmmod nvme_fabrics 00:09:37.182 rmmod nvme_keyring 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1072625 ']' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1072625 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1072625 ']' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1072625 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072625 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072625' 00:09:37.182 killing process with pid 1072625 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1072625 00:09:37.182 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1072625 00:09:37.441 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.441 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.700 15:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.601 15:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.601 00:09:39.601 real 0m8.643s 00:09:39.601 user 0m20.086s 00:09:39.601 sys 0m2.397s 00:09:39.601 15:51:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.601 15:51:06 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.601 ************************************ 00:09:39.601 END TEST nvmf_invalid 00:09:39.601 ************************************ 00:09:39.601 15:51:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:39.601 15:51:06 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:39.601 15:51:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:39.601 15:51:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.601 15:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.601 ************************************ 00:09:39.601 START TEST nvmf_abort 00:09:39.601 ************************************ 00:09:39.601 15:51:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:39.601 * Looking for test storage... 00:09:39.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.601 15:51:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.601 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:39.601 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.602 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.860 15:51:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.860 15:51:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.763 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.764 
15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:41.764 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:41.764 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:41.764 Found net devices under 0000:0a:00.0: cvl_0_0
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:09:41.764 Found net devices under 0000:0a:00.1: cvl_0_1
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:41.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:41.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms
00:09:41.764
00:09:41.764 --- 10.0.0.2 ping statistics ---
00:09:41.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:41.764 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:41.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:41.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms
00:09:41.764
00:09:41.764 --- 10.0.0.1 ping statistics ---
00:09:41.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:41.764 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1075756
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1075756
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1075756 ']'
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:41.764 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:41.764 [2024-07-15 15:51:08.548423] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:09:41.764 [2024-07-15 15:51:08.548500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:41.764 EAL: No free 2048 kB hugepages reported on node 1
00:09:41.764 [2024-07-15 15:51:08.612965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:42.023 [2024-07-15 15:51:08.724047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:42.023 [2024-07-15 15:51:08.724112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:42.023 [2024-07-15 15:51:08.724125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:42.023 [2024-07-15 15:51:08.724136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:42.023 [2024-07-15 15:51:08.724146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:42.023 [2024-07-15 15:51:08.724232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:42.023 [2024-07-15 15:51:08.724327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:42.023 [2024-07-15 15:51:08.724330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 [2024-07-15 15:51:08.866261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 Malloc0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 Delay0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.023 [2024-07-15 15:51:08.945333] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:42.023 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:42.280 15:51:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.280 15:51:08 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:09:42.280 EAL: No free 2048 kB hugepages reported on node 1
00:09:42.280 [2024-07-15 15:51:09.083111] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:44.809 Initializing NVMe Controllers
00:09:44.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:44.809 controller IO queue size 128 less than required
00:09:44.809 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:09:44.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:09:44.809 Initialization complete. Launching workers.
00:09:44.809 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32731
00:09:44.809 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32796, failed to submit 62
00:09:44.809 success 32735, unsuccess 61, failed 0
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:44.809 rmmod nvme_tcp
00:09:44.809 rmmod nvme_fabrics
00:09:44.809 rmmod nvme_keyring
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1075756 ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1075756 ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1075756'
00:09:44.809 killing process with pid 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1075756
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:44.809 15:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:47.346 15:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:47.346
00:09:47.346 real 0m7.217s
00:09:47.346 user 0m10.896s
00:09:47.346 sys 0m2.433s
00:09:47.346 15:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:47.346 15:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:47.346 ************************************
00:09:47.346 END TEST nvmf_abort
00:09:47.346 ************************************
00:09:47.346 15:51:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:47.346 15:51:13 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:09:47.346 15:51:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:47.346 15:51:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:47.346 15:51:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:47.346 ************************************
00:09:47.346 START TEST nvmf_ns_hotplug_stress
00:09:47.346 ************************************
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:09:47.346 * Looking for test storage...
00:09:47.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:09:47.346 15:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=()
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:49.247 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:49.248 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:49.248 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:49.248 Found net devices under 0000:0a:00.0: cvl_0_0
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:09:49.248 Found net devices under 0000:0a:00.1: cvl_0_1
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:49.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:49.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:09:49.248
00:09:49.248 --- 10.0.0.2 ping statistics ---
00:09:49.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:49.248 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:49.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:49.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:09:49.248
00:09:49.248 --- 10.0.0.1 ping statistics ---
00:09:49.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:49.248 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1078167
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1078167
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1078167 ']'
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:49.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:49.248 15:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:49.248 [2024-07-15 15:51:15.964720] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:09:49.248 [2024-07-15 15:51:15.964809] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.248 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.248 [2024-07-15 15:51:16.038179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.248 [2024-07-15 15:51:16.158220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.248 [2024-07-15 15:51:16.158287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.248 [2024-07-15 15:51:16.158303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.248 [2024-07-15 15:51:16.158317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.248 [2024-07-15 15:51:16.158327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.248 [2024-07-15 15:51:16.158416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.248 [2024-07-15 15:51:16.158473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.248 [2024-07-15 15:51:16.158476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:50.241 15:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:50.241 [2024-07-15 15:51:17.145428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.241 15:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.804 15:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.804 [2024-07-15 15:51:17.708416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.804 15:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.061 15:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:51.623 Malloc0 00:09:51.623 15:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.623 Delay0 00:09:51.623 15:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.879 15:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:52.136 NULL1 00:09:52.136 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:52.392 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1078640 00:09:52.392 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:52.392 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:52.393 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.393 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.649 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.905 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:52.905 15:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:53.162 true 00:09:53.162 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:53.162 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.418 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.675 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:53.675 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:53.932 true 00:09:53.932 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:53.932 15:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.189 15:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.446 15:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:54.446 15:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:54.704 true 00:09:54.704 15:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:54.704 15:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.635 Read completed with error (sct=0, sc=11) 00:09:55.891 15:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.148 15:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:56.148 15:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:56.148 true 00:09:56.406 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:56.406 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.663 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.663 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:56.663 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:56.920 true 00:09:56.920 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:56.920 15:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.852 15:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.110 15:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:58.110 15:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:58.367 true 00:09:58.367 15:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:58.367 15:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.625 
15:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.882 15:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:58.882 15:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:59.140 true 00:09:59.140 15:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:09:59.140 15:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.073 15:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.330 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:00.330 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:00.587 true 00:10:00.587 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:00.587 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.845 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.102 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:01.102 15:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:01.359 true 00:10:01.359 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:01.359 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.617 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.874 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:01.874 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:02.132 true 00:10:02.132 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:02.132 15:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.064 15:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.321 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:03.321 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:03.579 true 00:10:03.579 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:03.579 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.836 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.093 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:04.093 15:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:04.362 true 00:10:04.362 15:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:04.362 15:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.300 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.558 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:05.558 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:05.817 true 00:10:05.817 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:05.817 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.074 15:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.330 15:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:06.330 15:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:06.587 true 00:10:06.587 15:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:06.587 15:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.517 15:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.774 15:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:07.774 15:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:08.031 true 00:10:08.031 15:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:08.031 15:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.288 15:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.545 15:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:08.545 15:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:08.802 true 00:10:08.802 15:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:08.802 15:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.730 15:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.730 15:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:09.730 15:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:09.987 true 00:10:09.987 15:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:09.987 15:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.244 15:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.500 15:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:10.500 15:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:10.757 true 00:10:10.757 15:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:10.757 15:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.684 15:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.940 15:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:11.940 15:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:12.196 true 00:10:12.196 15:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:12.196 15:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.452 15:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.710 15:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:12.710 15:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:12.966 true 00:10:12.966 15:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:12.966 15:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.896 15:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.153 15:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:14.153 15:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:14.409 true 00:10:14.409 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:14.409 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.665 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.947 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:14.947 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:15.204 true 00:10:15.204 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:15.204 15:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
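The loop producing the trace above can be read directly off the ns_hotplug_stress.sh@44-@50 markers: while the background perf process (pid 1078640) is alive, the script hot-removes namespace 1, re-adds it backed by the Delay0 bdev, bumps null_size, and resizes the NULL1 bdev under load. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected fallout: status code 11 (0x0b) in the generic status code type is the NVMe "Invalid Namespace or Format" status, which in-flight reads hit while the namespace is momentarily detached. A minimal sketch of that loop, reconstructed from the trace (the rpc_py, nqn, and perf_pid variable names are assumptions; only their values are visible in this log, and the starting null_size is assumed since the excerpt joins mid-loop at 1015):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the trace
nqn=nqn.2016-06.io.spdk:cnode1                                            # subsystem NQN seen in every RPC call
null_size=1000                                                            # assumed starting value

while kill -0 "$perf_pid"; do                      # @44: keep churning while perf is alive
	$rpc_py nvmf_subsystem_remove_ns "$nqn" 1        # @45: hot-remove namespace 1
	$rpc_py nvmf_subsystem_add_ns "$nqn" Delay0      # @46: re-attach it, backed by Delay0
	null_size=$((null_size + 1))                     # @49: grow the resize target each round
	$rpc_py bdev_null_resize NULL1 "$null_size"      # @50: resize NULL1 while I/O is running
done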
00:10:15.461 15:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.717 15:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:15.717 15:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:15.974 true 00:10:15.974 15:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:15.974 15:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.904 15:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.162 15:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:17.162 15:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:17.420 true 00:10:17.420 15:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:17.420 15:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.352 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.352 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:18.352 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:18.610 true 00:10:18.610 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640 00:10:18.610 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.868 15:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.136 15:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:19.136 
15:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:10:19.442 true
00:10:19.442 15:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640
00:10:19.442 15:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:20.375 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:20.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.633 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:10:20.633 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:10:20.890 true
00:10:20.891 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640
00:10:20.891 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:21.148 15:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:21.405 15:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:21.406 15:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:21.663 true
00:10:21.663 15:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640
00:10:21.663 15:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:22.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:22.596 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:22.853 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:10:22.853 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:10:22.853 Initializing NVMe Controllers
00:10:22.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:22.853 Controller IO queue size 128, less than required.
00:10:22.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:22.853 Controller IO queue size 128, less than required.
00:10:22.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:22.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:22.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:22.853 Initialization complete. Launching workers.
00:10:22.853 ========================================================
00:10:22.853 Latency(us)
00:10:22.853 Device Information : IOPS MiB/s Average min max
00:10:22.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 771.47 0.38 80899.46 2670.49 1083866.83
00:10:22.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10324.87 5.04 12361.58 2925.23 452386.73
00:10:22.853 ========================================================
00:10:22.853 Total : 11096.34 5.42 17126.68 2670.49 1083866.83
00:10:23.111 true
00:10:23.111 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1078640
00:10:23.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1078640) - No such process
00:10:23.111 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1078640
00:10:23.111 15:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.368 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:23.626 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:23.626 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:23.626 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:23.626 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.626 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:23.884 null0
00:10:23.884 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:23.884 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.884 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:24.141 null1
00:10:24.141 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:24.141 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:24.141 15:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:24.399 null2
00:10:24.399 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:24.399 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:24.399 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:24.657 null3
00:10:24.657 15:51:51
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.657 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.657 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:24.915 null4 00:10:24.915 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.915 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.915 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:25.172 null5 00:10:25.172 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.172 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.172 15:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:25.430 null6 00:10:25.430 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.430 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.430 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:25.688 null7 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
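As a sanity check, the perf summary above is internally consistent: 771.47 + 10324.87 = 11096.34 IOPS, 0.38 + 5.04 = 5.42 MiB/s, and the total average latency is the IOPS-weighted mean, (771.47 * 80899.46 + 10324.87 * 12361.58) / 11096.34 ≈ 17126.7 us, matching the Total row. Once perf has exited ("kill: (1078640) - No such process"), the script removes namespaces 1-2 and starts the parallel phase traced at ns_hotplug_stress.sh@58-@66: eight null bdevs are created, then eight add_remove workers are forked, one per namespace ID. A rough reconstruction follows; only add_remove, null$i, nthreads, pids, and the RPC arguments are visible in the trace, the rest is assumed scaffolding:

nthreads=8                                         # @58
pids=()                                            # @58

for ((i = 0; i < nthreads; i++)); do               # @59
	$rpc_py bdev_null_create "null$i" 100 4096       # @60: 100 MiB null bdev, 4096-byte blocks
done

for ((i = 0; i < nthreads; i++)); do               # @62
	add_remove $((i + 1)) "null$i" &                 # @63: e.g. "add_remove 1 null0" in the trace
	pids+=($!)                                       # @64
done

wait "${pids[@]}"                                  # @66: matches "wait 1082578 1082579 ..." below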
00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.688 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
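The interleaved @14 and @16-@18 markers running through this stretch are those eight workers executing concurrently. Each worker is the same small helper, whose every line appears in the trace; reconstructed, it is roughly:

add_remove() {                                     # traced at ns_hotplug_stress.sh@14-@18
	local nsid=$1 bdev=$2                            # @14: e.g. "local nsid=1 bdev=null0"
	for ((i = 0; i < 10; i++)); do                   # @16: ten add/remove rounds per worker
		$rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
		$rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
	done
}

Because the eight workers race one another, their add_ns/remove_ns calls interleave nondeterministically, which is exactly the churn that fills the remainder of this excerpt.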
00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1082578 1082579 1082581 1082583 1082585 1082587 1082589 1082591 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.689 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.948 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.206 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.206 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.206 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.207 15:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.465 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.465 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.465 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.466 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.466 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.466 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.466 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.466 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.724 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.981 15:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.238 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.495 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.753 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.753 
15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.010 15:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.267 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.525 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.783 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:29.041 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.041 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.041 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:29.041 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:29.041 
15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:29.298 15:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.556 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.814 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.072 15:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.330 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.330 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.331 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.331 
15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.331 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.331 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.331 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.331 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.589 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.847 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.105 rmmod nvme_tcp 00:10:31.105 rmmod nvme_fabrics 00:10:31.105 rmmod nvme_keyring 00:10:31.105 15:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.105 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:31.105 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:31.105 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1078167 ']' 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1078167 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1078167 ']' 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1078167 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1078167 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1078167' 00:10:31.106 killing process with pid 1078167 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1078167 00:10:31.106 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1078167 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.673 15:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.624 15:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.624 00:10:33.624 real 0m46.628s 00:10:33.624 user 3m32.612s 00:10:33.625 sys 0m15.954s 00:10:33.625 15:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.625 15:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 ************************************ 00:10:33.625 END TEST nvmf_ns_hotplug_stress 00:10:33.625 ************************************ 00:10:33.625 15:52:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:33.625 15:52:00 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:33.625 15:52:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:33.625 15:52:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.625 15:52:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 ************************************ 00:10:33.625 START TEST nvmf_connect_stress 00:10:33.625 ************************************ 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:33.625 * Looking for test storage... 
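The ns_hotplug_stress trace above boils down to eight parallel workers, one per namespace, each hot-adding its NSID to cnode1 and hot-removing it again, ten times over; the shuffled ordering of the @16/@17/@18 lines is just the eight xtrace streams interleaving. A minimal bash sketch of that structure, assuming $rpc points at scripts/rpc.py and that the subsystem and the null0..null7 bdevs already exist (the real script is test/nvmf/target/ns_hotplug_stress.sh and may differ in detail):

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do
            # attach the namespace, then immediately detach it again
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 back NSIDs 1..8
    done
    wait

Everything after the loop is the standard nvmftestfini teardown seen in the trace: sync, unload nvme-tcp/nvme-fabrics/nvme-keyring, kill the target (pid 1078167), drop the cvl_0_0_ns_spdk namespace, and flush cvl_0_1.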
00:10:33.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.625 15:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.157 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:36.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:36.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:36.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.158 15:52:02 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:36.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:10:36.158 00:10:36.158 --- 10.0.0.2 ping statistics --- 00:10:36.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.158 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:10:36.158 00:10:36.158 --- 10.0.0.1 ping statistics --- 00:10:36.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.158 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1085344 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1085344 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1085344 ']' 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.158 15:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 [2024-07-15 15:52:02.737094] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
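The nvmftestinit sequence traced above (nvmf/common.sh@229-268) carves the test network out of the two E810 ports it discovered: one port moves into a private network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator side (10.0.0.1), and a ping in each direction confirms the link before the target starts. Condensed from the exact commands in the trace (the interface names cvl_0_0/cvl_0_1 are specific to this machine):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                 # target port lives in the netns
    ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

This is why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk in the lines that follow: the target owns 10.0.0.2 inside the namespace while connections arrive from 10.0.0.1 outside it.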
00:10:36.158 [2024-07-15 15:52:02.737189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.158 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.158 [2024-07-15 15:52:02.805277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:36.158 [2024-07-15 15:52:02.916582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.158 [2024-07-15 15:52:02.916630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.158 [2024-07-15 15:52:02.916659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.158 [2024-07-15 15:52:02.916672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.158 [2024-07-15 15:52:02.916689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.158 [2024-07-15 15:52:02.916751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.158 [2024-07-15 15:52:02.916812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.158 [2024-07-15 15:52:02.916815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 [2024-07-15 15:52:03.066916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.158 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.159 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.159 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.416 [2024-07-15 15:52:03.096040] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.416 NULL1 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1085492 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:36.416 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
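connect_stress.sh@15-25, traced above, stands up a minimal TCP target and points a connect/disconnect stress client at it. The RPC arguments below are copied verbatim from the trace; rpc_cmd in the trace is the autotest framework's wrapper around scripts/rpc.py, so a plain rpc.py invocation is assumed in this sketch:

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # any host, up to 10 namespaces
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, 512 B blocks

    # ten seconds of connect/disconnect churn on one core (0x1)
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" -t 10 &
    PERF_PID=$!

The @27/@28 for/cat loop that follows appends the same block of RPC commands to rpc.txt twenty times, building the batch that gets replayed while the client runs.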
00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.417 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.675 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.675 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:36.675 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.675 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.675 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.932 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.932 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:36.932 15:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.932 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.932 15:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.497 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.497 15:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 
00:10:37.497 15:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.497 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.497 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.755 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.755 15:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:37.755 15:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.755 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.755 15:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same five-record liveness check ([[ 0 == 0 ]] / kill -0 1085492 / rpc_cmd / xtrace_disable / set +x) repeats roughly every 250-560 ms from 00:10:38.013 through 00:10:45.986 while the stress run is in flight ...]
00:10:46.243 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.243 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:46.243 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.243 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
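The block above is connect_stress.sh's liveness loop: line 34 probes the stress process (PID 1085492) with kill -0 while line 35 keeps feeding RPCs at the target, so the controller stays under load for as long as the stressor is alive. A minimal sketch of that pattern, with hypothetical variable names -- the real script's plumbing around rpc.txt is more involved:

# Sketch only; shape inferred from the xtrace, $stress_pid stands in for 1085492
while kill -0 "$stress_pid" 2>/dev/null; do   # kill -0 sends no signal, it only tests liveness
    rpc_cmd < rpc.txt                         # keep the target's RPC server busy meanwhile
done
wait "$stress_pid"                            # reap the stressor (the 'wait 1085492' below)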
00:10:46.243 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.509 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1085492 00:10:46.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1085492) - No such process 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1085492 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.775 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.775 rmmod nvme_tcp 00:10:46.775 rmmod nvme_fabrics 00:10:46.775 rmmod nvme_keyring 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1085344 ']' 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1085344 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1085344 ']' 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1085344 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085344 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085344' 00:10:46.776 killing process with pid 1085344 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1085344 00:10:46.776 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1085344 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.034 15:52:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.934 15:52:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.934 00:10:48.934 real 0m15.427s 00:10:48.934 user 0m38.278s 00:10:48.934 sys 0m5.937s 00:10:48.934 15:52:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.934 15:52:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.934 ************************************ 00:10:48.934 END TEST nvmf_connect_stress 00:10:48.934 ************************************ 00:10:48.934 15:52:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:48.934 15:52:15 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:48.934 15:52:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:48.934 15:52:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.934 15:52:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.192 ************************************ 00:10:49.192 START TEST nvmf_fused_ordering 00:10:49.192 ************************************ 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:49.192 * Looking for test storage... 
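Before the fused-ordering run gets going, it is worth noting the order in which nvmftestfini closed out nvmf_connect_stress above. As a sketch -- a simplification of nvmf/common.sh, not its verbatim body, with $nvmfpid standing in for 1085344:

# Teardown order implied by the trace above (sketch; the retry loop is omitted)
sync                                   # flush before unloading initiator-side modules
modprobe -v -r nvme-tcp                # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
modprobe -v -r nvme-fabrics            # above are the output of these unloads
kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: stop the SPDK target app
_remove_spdk_ns                        # drop the cvl_0_0_ns_spdk network namespace
ip -4 addr flush cvl_0_1               # clear the initiator-side test address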
00:10:49.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.192 15:52:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:51.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.089 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.090 15:52:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.090 15:52:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.090 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.090 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.090 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:10:51.090 00:10:51.090 --- 10.0.0.2 ping statistics --- 00:10:51.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.090 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:51.348 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:51.349 00:10:51.349 --- 10.0.0.1 ping statistics --- 00:10:51.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.349 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1088641 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1088641 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1088641 ']' 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.349 15:52:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.349 [2024-07-15 15:52:18.103950] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
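The nvmf_tgt process starting here runs inside the network namespace assembled a few records earlier, which is why $NVMF_APP is prefixed with 'ip netns exec cvl_0_0_ns_spdk'. Restating that topology from the trace (cvl_0_0 and cvl_0_1 are the two E810 ports discovered above):

# Commands restated from the trace: target port in a namespace, initiator in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # the pings above check both directions

Splitting the two ports this way lets a single host drive real NIC-to-NIC NVMe/TCP traffic against itself.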
00:10:51.349 [2024-07-15 15:52:18.104027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.349 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.349 [2024-07-15 15:52:18.173462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.606 [2024-07-15 15:52:18.290218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.607 [2024-07-15 15:52:18.290292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.607 [2024-07-15 15:52:18.290315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.607 [2024-07-15 15:52:18.290328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.607 [2024-07-15 15:52:18.290339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.607 [2024-07-15 15:52:18.290375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.171 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 [2024-07-15 15:52:19.070053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 [2024-07-15 15:52:19.086212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.172 15:52:19 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 NULL1 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.172 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.429 15:52:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:52.429 [2024-07-15 15:52:19.131944] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:52.429 [2024-07-15 15:52:19.131997] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088795 ] 00:10:52.429 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.994 Attached to nqn.2016-06.io.spdk:cnode1 00:10:52.994 Namespace ID: 1 size: 1GB
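That completes the target bring-up: six RPCs assemble a TCP transport, a subsystem, a listener, and a 1 GB null-bdev namespace, and then the fused_ordering tool attaches as an initiator over the same transport ID. Restated from the trace, with arguments exactly as logged; the comments are interpretation (e.g. the 1000 MiB reading of bdev_null_create is inferred from the tool's 'size: 1GB' line):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets the I/O unit size
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
                                                     # -a: allow any host; -s: serial; -m: max namespaces
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512 B blocks
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) counters that follow appear to be the tool's per-submission progress output; the property under test is that they advance strictly in order over the TCP transport.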
00:10:52.994 fused_ordering(0) 00:10:52.994 fused_ordering(1) 00:10:52.994 fused_ordering(2)
[... fused_ordering(3) through fused_ordering(796) continue in strict ascending order, in bursts stamped 00:10:52.994-00:10:52.995, 00:10:53.561-00:10:53.562, 00:10:54.127, and 00:10:54.693 ...]
00:10:54.693 fused_ordering(797)
fused_ordering(798) 00:10:54.693 fused_ordering(799) 00:10:54.693 fused_ordering(800) 00:10:54.693 fused_ordering(801) 00:10:54.693 fused_ordering(802) 00:10:54.693 fused_ordering(803) 00:10:54.693 fused_ordering(804) 00:10:54.693 fused_ordering(805) 00:10:54.693 fused_ordering(806) 00:10:54.693 fused_ordering(807) 00:10:54.694 fused_ordering(808) 00:10:54.694 fused_ordering(809) 00:10:54.694 fused_ordering(810) 00:10:54.694 fused_ordering(811) 00:10:54.694 fused_ordering(812) 00:10:54.694 fused_ordering(813) 00:10:54.694 fused_ordering(814) 00:10:54.694 fused_ordering(815) 00:10:54.694 fused_ordering(816) 00:10:54.694 fused_ordering(817) 00:10:54.694 fused_ordering(818) 00:10:54.694 fused_ordering(819) 00:10:54.694 fused_ordering(820) 00:10:55.658 fused_ordering(821) 00:10:55.658 fused_ordering(822) 00:10:55.658 fused_ordering(823) 00:10:55.658 fused_ordering(824) 00:10:55.658 fused_ordering(825) 00:10:55.658 fused_ordering(826) 00:10:55.658 fused_ordering(827) 00:10:55.658 fused_ordering(828) 00:10:55.658 fused_ordering(829) 00:10:55.658 fused_ordering(830) 00:10:55.658 fused_ordering(831) 00:10:55.658 fused_ordering(832) 00:10:55.658 fused_ordering(833) 00:10:55.658 fused_ordering(834) 00:10:55.658 fused_ordering(835) 00:10:55.658 fused_ordering(836) 00:10:55.658 fused_ordering(837) 00:10:55.658 fused_ordering(838) 00:10:55.658 fused_ordering(839) 00:10:55.658 fused_ordering(840) 00:10:55.658 fused_ordering(841) 00:10:55.658 fused_ordering(842) 00:10:55.658 fused_ordering(843) 00:10:55.658 fused_ordering(844) 00:10:55.658 fused_ordering(845) 00:10:55.658 fused_ordering(846) 00:10:55.658 fused_ordering(847) 00:10:55.658 fused_ordering(848) 00:10:55.658 fused_ordering(849) 00:10:55.658 fused_ordering(850) 00:10:55.658 fused_ordering(851) 00:10:55.658 fused_ordering(852) 00:10:55.658 fused_ordering(853) 00:10:55.658 fused_ordering(854) 00:10:55.658 fused_ordering(855) 00:10:55.658 fused_ordering(856) 00:10:55.658 fused_ordering(857) 00:10:55.658 fused_ordering(858) 00:10:55.658 fused_ordering(859) 00:10:55.658 fused_ordering(860) 00:10:55.658 fused_ordering(861) 00:10:55.658 fused_ordering(862) 00:10:55.658 fused_ordering(863) 00:10:55.658 fused_ordering(864) 00:10:55.658 fused_ordering(865) 00:10:55.658 fused_ordering(866) 00:10:55.658 fused_ordering(867) 00:10:55.658 fused_ordering(868) 00:10:55.658 fused_ordering(869) 00:10:55.658 fused_ordering(870) 00:10:55.658 fused_ordering(871) 00:10:55.658 fused_ordering(872) 00:10:55.658 fused_ordering(873) 00:10:55.658 fused_ordering(874) 00:10:55.658 fused_ordering(875) 00:10:55.658 fused_ordering(876) 00:10:55.658 fused_ordering(877) 00:10:55.658 fused_ordering(878) 00:10:55.658 fused_ordering(879) 00:10:55.658 fused_ordering(880) 00:10:55.658 fused_ordering(881) 00:10:55.658 fused_ordering(882) 00:10:55.658 fused_ordering(883) 00:10:55.658 fused_ordering(884) 00:10:55.658 fused_ordering(885) 00:10:55.658 fused_ordering(886) 00:10:55.658 fused_ordering(887) 00:10:55.658 fused_ordering(888) 00:10:55.658 fused_ordering(889) 00:10:55.658 fused_ordering(890) 00:10:55.658 fused_ordering(891) 00:10:55.658 fused_ordering(892) 00:10:55.658 fused_ordering(893) 00:10:55.658 fused_ordering(894) 00:10:55.658 fused_ordering(895) 00:10:55.658 fused_ordering(896) 00:10:55.658 fused_ordering(897) 00:10:55.658 fused_ordering(898) 00:10:55.658 fused_ordering(899) 00:10:55.658 fused_ordering(900) 00:10:55.658 fused_ordering(901) 00:10:55.658 fused_ordering(902) 00:10:55.658 fused_ordering(903) 00:10:55.658 fused_ordering(904) 00:10:55.658 fused_ordering(905) 
00:10:55.658 fused_ordering(906) 00:10:55.658 fused_ordering(907) 00:10:55.658 fused_ordering(908) 00:10:55.658 fused_ordering(909) 00:10:55.658 fused_ordering(910) 00:10:55.658 fused_ordering(911) 00:10:55.658 fused_ordering(912) 00:10:55.658 fused_ordering(913) 00:10:55.658 fused_ordering(914) 00:10:55.658 fused_ordering(915) 00:10:55.658 fused_ordering(916) 00:10:55.658 fused_ordering(917) 00:10:55.658 fused_ordering(918) 00:10:55.658 fused_ordering(919) 00:10:55.658 fused_ordering(920) 00:10:55.658 fused_ordering(921) 00:10:55.658 fused_ordering(922) 00:10:55.658 fused_ordering(923) 00:10:55.658 fused_ordering(924) 00:10:55.658 fused_ordering(925) 00:10:55.658 fused_ordering(926) 00:10:55.658 fused_ordering(927) 00:10:55.658 fused_ordering(928) 00:10:55.658 fused_ordering(929) 00:10:55.658 fused_ordering(930) 00:10:55.658 fused_ordering(931) 00:10:55.658 fused_ordering(932) 00:10:55.658 fused_ordering(933) 00:10:55.658 fused_ordering(934) 00:10:55.658 fused_ordering(935) 00:10:55.658 fused_ordering(936) 00:10:55.658 fused_ordering(937) 00:10:55.658 fused_ordering(938) 00:10:55.658 fused_ordering(939) 00:10:55.658 fused_ordering(940) 00:10:55.658 fused_ordering(941) 00:10:55.658 fused_ordering(942) 00:10:55.658 fused_ordering(943) 00:10:55.658 fused_ordering(944) 00:10:55.658 fused_ordering(945) 00:10:55.658 fused_ordering(946) 00:10:55.658 fused_ordering(947) 00:10:55.658 fused_ordering(948) 00:10:55.658 fused_ordering(949) 00:10:55.658 fused_ordering(950) 00:10:55.658 fused_ordering(951) 00:10:55.658 fused_ordering(952) 00:10:55.658 fused_ordering(953) 00:10:55.658 fused_ordering(954) 00:10:55.658 fused_ordering(955) 00:10:55.658 fused_ordering(956) 00:10:55.658 fused_ordering(957) 00:10:55.658 fused_ordering(958) 00:10:55.658 fused_ordering(959) 00:10:55.658 fused_ordering(960) 00:10:55.658 fused_ordering(961) 00:10:55.658 fused_ordering(962) 00:10:55.658 fused_ordering(963) 00:10:55.658 fused_ordering(964) 00:10:55.658 fused_ordering(965) 00:10:55.658 fused_ordering(966) 00:10:55.658 fused_ordering(967) 00:10:55.658 fused_ordering(968) 00:10:55.658 fused_ordering(969) 00:10:55.658 fused_ordering(970) 00:10:55.658 fused_ordering(971) 00:10:55.658 fused_ordering(972) 00:10:55.658 fused_ordering(973) 00:10:55.658 fused_ordering(974) 00:10:55.658 fused_ordering(975) 00:10:55.658 fused_ordering(976) 00:10:55.658 fused_ordering(977) 00:10:55.658 fused_ordering(978) 00:10:55.658 fused_ordering(979) 00:10:55.658 fused_ordering(980) 00:10:55.658 fused_ordering(981) 00:10:55.658 fused_ordering(982) 00:10:55.658 fused_ordering(983) 00:10:55.658 fused_ordering(984) 00:10:55.658 fused_ordering(985) 00:10:55.658 fused_ordering(986) 00:10:55.658 fused_ordering(987) 00:10:55.658 fused_ordering(988) 00:10:55.658 fused_ordering(989) 00:10:55.658 fused_ordering(990) 00:10:55.658 fused_ordering(991) 00:10:55.658 fused_ordering(992) 00:10:55.658 fused_ordering(993) 00:10:55.658 fused_ordering(994) 00:10:55.658 fused_ordering(995) 00:10:55.659 fused_ordering(996) 00:10:55.659 fused_ordering(997) 00:10:55.659 fused_ordering(998) 00:10:55.659 fused_ordering(999) 00:10:55.659 fused_ordering(1000) 00:10:55.659 fused_ordering(1001) 00:10:55.659 fused_ordering(1002) 00:10:55.659 fused_ordering(1003) 00:10:55.659 fused_ordering(1004) 00:10:55.659 fused_ordering(1005) 00:10:55.659 fused_ordering(1006) 00:10:55.659 fused_ordering(1007) 00:10:55.659 fused_ordering(1008) 00:10:55.659 fused_ordering(1009) 00:10:55.659 fused_ordering(1010) 00:10:55.659 fused_ordering(1011) 00:10:55.659 fused_ordering(1012) 
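The last few fused_ordering entries and the nvmftestfini teardown follow below. For orientation, that teardown reduces to roughly the following shell sequence. This is a hedged sketch, not the verbatim nvmf/common.sh source: the pid (1088641), namespace (cvl_0_0_ns_spdk) and interface (cvl_0_1) are taken from this run, and the body of _remove_spdk_ns is assumed to be a plain netns delete:

  trap - SIGINT SIGTERM EXIT          # drop the error trap installed at test start
  sync                                # flush outstanding writes before unloading modules
  modprobe -v -r nvme-tcp             # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen below
  modprobe -v -r nvme-fabrics         # effectively a no-op once the first removal has cascaded
  kill 1088641 && wait 1088641        # killprocess: stop the nvmf_tgt app (wait only works from its parent shell)
  ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1            # clear the initiator-side test address (traced at 15:52:24 below)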
00:10:55.659 fused_ordering(1013) 00:10:55.659 fused_ordering(1014) 00:10:55.659 fused_ordering(1015) 00:10:55.659 fused_ordering(1016) 00:10:55.659 fused_ordering(1017) 00:10:55.659 fused_ordering(1018) 00:10:55.659 fused_ordering(1019) 00:10:55.659 fused_ordering(1020) 00:10:55.659 fused_ordering(1021) 00:10:55.659 fused_ordering(1022) 00:10:55.659 fused_ordering(1023) 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.659 rmmod nvme_tcp 00:10:55.659 rmmod nvme_fabrics 00:10:55.659 rmmod nvme_keyring 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1088641 ']' 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1088641 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1088641 ']' 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1088641 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1088641 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1088641' 00:10:55.659 killing process with pid 1088641 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1088641 00:10:55.659 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1088641 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.917 15:52:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.447 15:52:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.447 00:10:58.447 real 0m8.971s 00:10:58.447 user 0m6.631s 00:10:58.447 sys 0m4.139s 00:10:58.447 15:52:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.447 15:52:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:58.447 ************************************ 00:10:58.447 END TEST nvmf_fused_ordering 00:10:58.447 ************************************ 00:10:58.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:58.447 15:52:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:58.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.447 ************************************ 00:10:58.447 START TEST nvmf_delete_subsystem 00:10:58.447 ************************************ 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:58.447 * Looking for test storage... 00:10:58.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.447 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.448 15:52:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.448 15:52:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.448 15:52:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.349 15:52:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.349 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.350 15:52:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.350 15:52:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:11:00.350 00:11:00.350 --- 10.0.0.2 ping statistics --- 00:11:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.350 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:00.350 00:11:00.350 --- 10.0.0.1 ping statistics --- 00:11:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.350 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1091116 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1091116 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1091116 ']' 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.350 15:52:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.350 [2024-07-15 15:52:27.026466] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:00.350 [2024-07-15 15:52:27.026550] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.350 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.350 [2024-07-15 15:52:27.094311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.350 [2024-07-15 15:52:27.209141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:00.350 [2024-07-15 15:52:27.209213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.350 [2024-07-15 15:52:27.209229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.350 [2024-07-15 15:52:27.209243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.350 [2024-07-15 15:52:27.209255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.350 [2024-07-15 15:52:27.209507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.350 [2024-07-15 15:52:27.209513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 [2024-07-15 15:52:28.034566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 [2024-07-15 15:52:28.050778] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 NULL1 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 Delay0 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1091277 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:01.284 15:52:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:01.284 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.284 [2024-07-15 15:52:28.125562] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
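Everything is now in place for the actual test. Condensing the rpc_cmd trace above into plain commands gives the following sketch; the script drives these through its rpc_cmd wrapper rather than calling scripts/rpc.py directly, the rpc.py name is abbreviated here, and the inline glosses are best-effort readings of the flags:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                  # options copied verbatim from delete_subsystem.sh@15
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                          # 1000 MB null bdev with 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # read/write latencies in microseconds, about 1 s per I/O
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                   # 5 s of 512-byte random I/O at QD 128, 70% reads

The roughly one-second Delay0 latency is what makes the test meaningful: at queue depth 128 against a bdev that slow, a large batch of commands is guaranteed to still be in flight when the subsystem is deleted two seconds in.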
00:11:03.201 15:52:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.201 15:52:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.201 15:52:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:03.472 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions elided here and between the three entries below]
00:11:03.472 [2024-07-15 15:52:30.216247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d5c0 is same with the state(5) to be set
00:11:03.472 [2024-07-15 15:52:30.216920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff10c000c00 is same with the state(5) to be set
00:11:04.406 [2024-07-15 15:52:31.181464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240eac0 is same with the state(5) to be set
00:11:04.406 [remaining error completions elided]
sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 [2024-07-15 15:52:31.218220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d980 is same with the state(5) to be set 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 [2024-07-15 15:52:31.218441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d7a0 is same with the state(5) to be set 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 
Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 [2024-07-15 15:52:31.218667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff10c00cfe0 is same with the state(5) to be set 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Write completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 Read completed with error (sct=0, sc=8) 00:11:04.406 [2024-07-15 15:52:31.218912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d3e0 is same with the state(5) to be set 00:11:04.406 Initializing NVMe Controllers 00:11:04.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.406 Controller IO queue size 128, less than required. 00:11:04.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:04.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:04.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:04.406 Initialization complete. Launching workers. 
00:11:04.406 ========================================================
00:11:04.406 Latency(us)
00:11:04.406 Device Information : IOPS MiB/s Average min max
00:11:04.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 180.67 0.09 955692.89 1015.52 1012472.70
00:11:04.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.26 0.08 844431.03 528.74 1011484.61
00:11:04.406 ========================================================
00:11:04.406 Total : 349.93 0.17 901876.87 528.74 1012472.70
00:11:04.406
00:11:04.406 [2024-07-15 15:52:31.220124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240eac0 (9): Bad file descriptor
00:11:04.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:04.406 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.406 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:04.406 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1091277 00:11:04.406 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1091277 00:11:04.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1091277) - No such process 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1091277 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1091277 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1091277 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:04.970 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.971 [2024-07-15 15:52:31.744615] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1091681 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.971 15:52:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:04.971 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.971 [2024-07-15 15:52:31.807403] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
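
The trace above (target/delete_subsystem.sh lines 52-60 as logged) launches spdk_nvme_perf in the background and then polls the PID with kill -0 until the workload exits. A minimal sketch of that launch-and-poll pattern, reusing the exact perf invocation from this log; the loop is a paraphrase of the traced logic, not the script verbatim:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 3-second 70/30 randrw load, 512-byte IOs, queue depth 128, cores 2 and 3 (-c 0xC)
    "$spdk/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                                # 1091681 in this run
    delay=0
    # kill -0 delivers no signal; it only tests whether the PID still exists
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "spdk_nvme_perf did not finish" >&2; exit 1; }
        sleep 0.5
    done
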
00:11:05.536 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:05.536 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:05.536 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.101 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.101 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:06.101 15:52:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.359 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.359 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:06.359 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.922 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.922 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:06.922 15:52:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.486 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:07.486 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:07.486 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:08.051 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:08.051 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:08.051 15:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:08.309 Initializing NVMe Controllers 00:11:08.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:08.309 Controller IO queue size 128, less than required. 00:11:08.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:08.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:08.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:08.309 Initialization complete. Launching workers. 
00:11:08.309 ========================================================
00:11:08.309 Latency(us)
00:11:08.309 Device Information : IOPS MiB/s Average min max
00:11:08.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004529.11 1000228.26 1041114.47
00:11:08.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003475.42 1000240.76 1040952.07
00:11:08.309 ========================================================
00:11:08.309 Total : 256.00 0.12 1004002.27 1000228.26 1041114.47
00:11:08.309
00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1091681 00:11:08.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1091681) - No such process 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1091681 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.568 rmmod nvme_tcp 00:11:08.568 rmmod nvme_fabrics 00:11:08.568 rmmod nvme_keyring 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1091116 ']' 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1091116 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1091116 ']' 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1091116 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091116 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091116' killing process with pid 1091116 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1091116 00:11:08.568 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
1091116 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.826 15:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.363 15:52:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.363 00:11:11.363 real 0m12.784s 00:11:11.363 user 0m29.208s 00:11:11.363 sys 0m2.880s 00:11:11.363 15:52:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.363 15:52:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.363 ************************************ 00:11:11.363 END TEST nvmf_delete_subsystem 00:11:11.363 ************************************ 00:11:11.363 15:52:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:11.363 15:52:37 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:11.363 15:52:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.363 15:52:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.363 15:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.363 ************************************ 00:11:11.363 START TEST nvmf_ns_masking 00:11:11.363 ************************************ 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:11.363 * Looking for test storage... 
00:11:11.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f14f8306-62a7-4fcf-b396-e287b8ab0bb7 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fa4e4bbc-3761-4bab-b349-2866b9be339a 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=50da1c10-ae0d-42ce-b723-0c83387bb52c 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.363 15:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.364 15:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:13.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:13.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.262 
15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:13.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:13.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:13.262 00:11:13.262 --- 10.0.0.2 ping statistics --- 00:11:13.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.262 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:13.262 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:11:13.262 00:11:13.262 --- 10.0.0.1 ping statistics --- 00:11:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.263 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1094025 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1094025 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1094025 ']' 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.263 15:52:39 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.263 15:52:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.263 [2024-07-15 15:52:40.021488] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:13.263 [2024-07-15 15:52:40.021589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.263 [2024-07-15 15:52:40.095622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.519 [2024-07-15 15:52:40.201418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.519 [2024-07-15 15:52:40.201470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.519 [2024-07-15 15:52:40.201484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.519 [2024-07-15 15:52:40.201496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.519 [2024-07-15 15:52:40.201506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.519 [2024-07-15 15:52:40.201540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.519 15:52:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.776 [2024-07-15 15:52:40.612331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.776 15:52:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:13.776 15:52:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:13.776 15:52:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:14.033 Malloc1 00:11:14.033 15:52:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:14.290 Malloc2 00:11:14.290 15:52:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
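
Collected from the trace above, the whole target bring-up boils down to four RPC calls (a sketch; the rpc.py path and every argument are exactly what the log shows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, -o/-u options as set by the harness
    $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # -a: allow any host to connect; per-namespace masking is layered on top later in the test
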
00:11:14.546 15:52:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:15.108 15:52:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.364 [2024-07-15 15:52:42.043543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.364 15:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 50da1c10-ae0d-42ce-b723-0c83387bb52c -a 10.0.0.2 -s 4420 -i 4 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.365 15:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:17.280 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:17.537 [ 0]:0x1 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c828e58da5b44def8e56ecb212593c25 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c828e58da5b44def8e56ecb212593c25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.537 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
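
The visibility probe used from here on, ns_is_visible (target/ns_masking.sh lines 43-45 per the trace), asks the connected controller to list a namespace and then reads its NGUID. Reconstructed from the traced commands as a sketch, with /dev/nvme0 standing in for whatever controller the connect step discovered:

    ns_is_visible() {    # $1 is the NSID to probe, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace masked away from this host reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
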
00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:17.795 [ 0]:0x1 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c828e58da5b44def8e56ecb212593c25 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c828e58da5b44def8e56ecb212593c25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:17.795 [ 1]:0x2 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:17.795 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.051 15:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.310 15:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 50da1c10-ae0d-42ce-b723-0c83387bb52c -a 10.0.0.2 -s 4420 -i 4 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:18.623 15:52:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.148 15:52:47 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.148 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.149 [ 0]:0x2 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.149 15:52:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.149 [ 0]:0x1 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c828e58da5b44def8e56ecb212593c25 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c828e58da5b44def8e56ecb212593c25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.149 [ 1]:0x2 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.149 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.405 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:21.405 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.405 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.662 [ 0]:0x2 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.662 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.920 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:21.920 15:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 50da1c10-ae0d-42ce-b723-0c83387bb52c -a 10.0.0.2 -s 4420 -i 4 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:22.198 15:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
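The check driven by ns_masking.sh@43-45 throughout this run can be read as a small helper: a namespace counts as visible only if it appears in the controller's active namespace list and reports a non-zero NGUID, while a masked namespace reads back as all zeroes. A minimal sketch of that helper and of the per-host visibility toggle, reconstructed from the xtrace above (the /dev/nvme0 controller name and the short rpc.py path are taken from this log; treat this standalone form as an approximation of the harness code, not the harness itself):

# ns_is_visible: sketch of the assertion the harness repeats after each RPC
ns_is_visible() {
    local nsid=$1
    # must be listed among the controller's active namespaces
    nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
    # and must expose a real NGUID; a masked namespace reads back all zeroes
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# per-host visibility of a --no-auto-visible namespace is flipped with
# these two target-side RPCs, as exercised above
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1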
00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:24.092 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.350 [ 0]:0x1 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c828e58da5b44def8e56ecb212593c25 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c828e58da5b44def8e56ecb212593c25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.350 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:24.607 [ 1]:0x2 00:11:24.607 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.607 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.607 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:24.607 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.607 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:24.863 [ 0]:0x2 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.863 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.121 [2024-07-15 15:52:51.929427] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:25.121 request: 00:11:25.121 { 00:11:25.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.121 "nsid": 2, 00:11:25.121 "host": "nqn.2016-06.io.spdk:host1", 00:11:25.121 "method": "nvmf_ns_remove_host", 00:11:25.121 "req_id": 1 00:11:25.121 } 00:11:25.121 Got JSON-RPC error response 00:11:25.121 response: 00:11:25.121 { 00:11:25.121 "code": -32602, 00:11:25.121 "message": "Invalid parameters" 00:11:25.121 } 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.121 15:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.121 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.379 [ 0]:0x2 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f34ffe8621b498baa97483a4cf8110b 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
4f34ffe8621b498baa97483a4cf8110b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1095650 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1095650 /var/tmp/host.sock 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1095650 ']' 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:25.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.379 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.379 [2024-07-15 15:52:52.280239] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
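From this point the test runs two SPDK processes side by side: the nvmf target keeps its default RPC socket, while a second spdk_tgt is started as the NVMe-oF host with its own UNIX socket, and every host-side command is routed through that socket. A condensed sketch of the pattern, with the socket path, core mask, and controller arguments copied from this run (hostrpc mirrors the harness wrapper of the same name; full binary paths are shortened here):

spdk_tgt -r /var/tmp/host.sock -m 2 &             # host-side SPDK app ($hostpid)
hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }  # route RPCs to the host socket

# attach the target's subsystem from the host process, once per host NQN;
# namespace masking then shows up as which bdevs each attach produces
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
hostrpc bdev_get_bdevs                            # expect nvme0n1 and nvme1n2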
00:11:25.379 [2024-07-15 15:52:52.280321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095650 ] 00:11:25.379 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.638 [2024-07-15 15:52:52.341528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.638 [2024-07-15 15:52:52.453050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.894 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.894 15:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:25.894 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.151 15:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.417 15:52:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f14f8306-62a7-4fcf-b396-e287b8ab0bb7 00:11:26.417 15:52:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:26.417 15:52:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F14F830662A74FCFB396E287B8AB0BB7 -i 00:11:26.675 15:52:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fa4e4bbc-3761-4bab-b349-2866b9be339a 00:11:26.675 15:52:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:26.675 15:52:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FA4E4BBC37614BABB3492866B9BE339A -i 00:11:26.951 15:52:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.208 15:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:27.465 15:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:27.465 15:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:28.033 nvme0n1 00:11:28.033 15:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:28.033 15:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:28.289 nvme1n2 00:11:28.289 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:28.289 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:28.289 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:28.289 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:28.289 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:28.545 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:28.545 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:28.545 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:28.545 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:28.803 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f14f8306-62a7-4fcf-b396-e287b8ab0bb7 == \f\1\4\f\8\3\0\6\-\6\2\a\7\-\4\f\c\f\-\b\3\9\6\-\e\2\8\7\b\8\a\b\0\b\b\7 ]] 00:11:28.803 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:28.803 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:28.803 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fa4e4bbc-3761-4bab-b349-2866b9be339a == \f\a\4\e\4\b\b\c\-\3\7\6\1\-\4\b\a\b\-\b\3\4\9\-\2\8\6\6\b\9\b\e\3\3\9\a ]] 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1095650 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1095650 ']' 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1095650 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1095650 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1095650' 00:11:29.062 killing process with pid 1095650 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1095650 00:11:29.062 15:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1095650 00:11:29.627 15:52:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:29.886 15:52:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.886 rmmod nvme_tcp 00:11:29.886 rmmod nvme_fabrics 00:11:29.886 rmmod nvme_keyring 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1094025 ']' 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1094025 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1094025 ']' 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1094025 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1094025 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1094025' 00:11:29.886 killing process with pid 1094025 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1094025 00:11:29.886 15:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1094025 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.454 15:52:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.359 15:52:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.359 00:11:32.359 real 0m21.416s 00:11:32.359 user 0m28.047s 00:11:32.359 sys 0m4.145s 00:11:32.359 15:52:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.359 15:52:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 ************************************ 00:11:32.359 END TEST nvmf_ns_masking 00:11:32.359 ************************************ 00:11:32.359 15:52:59 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:32.359 15:52:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:32.359 15:52:59 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:32.359 15:52:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.359 15:52:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.359 15:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 ************************************ 00:11:32.359 START TEST nvmf_nvme_cli 00:11:32.359 ************************************ 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:32.359 * Looking for test storage... 00:11:32.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.359 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.360 15:52:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.339 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.597 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.597 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.597 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.598 15:53:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:11:34.598 00:11:34.598 --- 10.0.0.2 ping statistics --- 00:11:34.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.598 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:11:34.598 00:11:34.598 --- 10.0.0.1 ping statistics --- 00:11:34.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.598 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1098151 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1098151 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1098151 ']' 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.598 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 [2024-07-15 15:53:01.423560] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
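The interface plumbing that produced these ping results follows a fixed recipe: the first detected port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened between them so the two ports of one NIC talk back-to-back. Condensed from the nvmf_tcp_init trace above (the cvl_0_0/cvl_0_1 names are the ones this run detected on the e810 NIC; the nvmf_tgt path is shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port

# the target app is then launched inside the namespace, as below:
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF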
00:11:34.598 [2024-07-15 15:53:01.423643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.598 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.598 [2024-07-15 15:53:01.489427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.856 [2024-07-15 15:53:01.611846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.856 [2024-07-15 15:53:01.611915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.856 [2024-07-15 15:53:01.611932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.856 [2024-07-15 15:53:01.611946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.856 [2024-07-15 15:53:01.611957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.856 [2024-07-15 15:53:01.612052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.856 [2024-07-15 15:53:01.612112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.856 [2024-07-15 15:53:01.612166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.856 [2024-07-15 15:53:01.612169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 [2024-07-15 15:53:01.776974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.856 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 Malloc0 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 Malloc1 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 [2024-07-15 15:53:01.862922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:35.114 00:11:35.114 Discovery Log Number of Records 2, Generation counter 2 00:11:35.114 =====Discovery Log Entry 0====== 00:11:35.114 trtype: tcp 00:11:35.114 adrfam: ipv4 00:11:35.114 subtype: current discovery subsystem 00:11:35.114 treq: not required 00:11:35.114 portid: 0 00:11:35.114 trsvcid: 4420 00:11:35.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.114 traddr: 10.0.0.2 00:11:35.114 eflags: explicit discovery connections, duplicate discovery information 00:11:35.114 sectype: none 00:11:35.114 =====Discovery Log Entry 1====== 00:11:35.114 trtype: tcp 00:11:35.114 adrfam: ipv4 00:11:35.114 subtype: nvme subsystem 00:11:35.114 treq: not required 00:11:35.114 portid: 0 00:11:35.114 trsvcid: 4420 00:11:35.114 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.114 traddr: 10.0.0.2 00:11:35.114 eflags: none 00:11:35.114 sectype: none 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:35.114 15:53:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:35.679 15:53:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.200 15:53:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:38.200 /dev/nvme0n1 ]] 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.200 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:38.201 15:53:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.459 rmmod nvme_tcp 00:11:38.459 rmmod nvme_fabrics 00:11:38.459 rmmod nvme_keyring 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1098151 ']' 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1098151 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1098151 ']' 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1098151 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1098151 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1098151' 00:11:38.459 killing process with pid 1098151 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1098151 00:11:38.459 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1098151 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.718 15:53:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.251 15:53:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.251 00:11:41.251 real 0m8.426s 00:11:41.251 user 0m16.132s 00:11:41.251 sys 0m2.195s 00:11:41.251 15:53:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.251 15:53:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:41.251 ************************************ 00:11:41.251 END TEST nvmf_nvme_cli 00:11:41.251 ************************************ 00:11:41.251 15:53:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:41.251 15:53:07 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:41.251 15:53:07 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:41.251 15:53:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.251 15:53:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.251 15:53:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.251 ************************************ 00:11:41.251 START TEST nvmf_vfio_user 00:11:41.251 ************************************ 00:11:41.251 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:41.251 * Looking for test storage... 00:11:41.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:41.252 
15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1099081 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1099081' 00:11:41.252 Process pid: 1099081 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1099081 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1099081 ']' 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.252 15:53:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:41.252 [2024-07-15 15:53:07.800803] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:41.252 [2024-07-15 15:53:07.800889] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.252 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.252 [2024-07-15 15:53:07.862870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.252 [2024-07-15 15:53:07.983624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.252 [2024-07-15 15:53:07.983698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.252 [2024-07-15 15:53:07.983714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.252 [2024-07-15 15:53:07.983727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.252 [2024-07-15 15:53:07.983739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
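[annotation] The xtrace around this point boils down to a small bring-up recipe: start nvmf_tgt, register the VFIOUSER transport, back it with a malloc bdev, and expose it through a directory-based listener. Below is a minimal sketch of that flow, reconstructed only from commands that appear verbatim in this log; SPDK_DIR is an assumed shorthand for the checkout path, and the real test waits on the RPC socket (waitforlisten) rather than sleeping.

    #!/usr/bin/env bash
    # Sketch reconstructed from the surrounding trace; not the test script itself.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand
    rpc="$SPDK_DIR/scripts/rpc.py"
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # same flags as the launch above
    sleep 1    # stand-in for waitforlisten on /var/tmp/spdk.sock
    "$rpc" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    "$rpc" bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    "$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0   # traddr is a directory, not an IP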
00:11:41.252 [2024-07-15 15:53:07.983810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.252 [2024-07-15 15:53:07.983868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.252 [2024-07-15 15:53:07.983930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.252 [2024-07-15 15:53:07.983935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.183 15:53:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.183 15:53:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:42.183 15:53:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:43.115 15:53:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:43.373 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:43.373 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:43.373 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.373 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:43.373 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:43.630 Malloc1 00:11:43.630 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:43.887 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:44.144 15:53:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:44.402 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:44.402 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:44.402 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.402 Malloc2 00:11:44.660 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:44.660 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:44.917 15:53:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:45.175 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:45.175 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:45.175 15:53:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:45.175 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:45.175 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:45.175 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:45.175 [2024-07-15 15:53:12.096301] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:45.175 [2024-07-15 15:53:12.096344] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099622 ] 00:11:45.434 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.434 [2024-07-15 15:53:12.131169] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:45.434 [2024-07-15 15:53:12.139365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:45.434 [2024-07-15 15:53:12.139392] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8b6cd9d000 00:11:45.434 [2024-07-15 15:53:12.140361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.141364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.142364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.143368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.144375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.145381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.146384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.147390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.434 [2024-07-15 15:53:12.148395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:45.434 [2024-07-15 15:53:12.148414] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8b6cd92000 00:11:45.434 [2024-07-15 15:53:12.149531] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:45.434 [2024-07-15 15:53:12.165517] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:45.434 [2024-07-15 15:53:12.165549] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:45.434 [2024-07-15 15:53:12.170537] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:45.434 [2024-07-15 15:53:12.170589] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:45.434 [2024-07-15 15:53:12.170684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:45.434 [2024-07-15 15:53:12.170715] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:45.434 [2024-07-15 15:53:12.170727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:45.434 [2024-07-15 15:53:12.171529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:45.434 [2024-07-15 15:53:12.171550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:45.434 [2024-07-15 15:53:12.171563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:45.434 [2024-07-15 15:53:12.172531] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:45.434 [2024-07-15 15:53:12.172550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:45.434 [2024-07-15 15:53:12.172564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.173535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:45.434 [2024-07-15 15:53:12.173555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.174544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:45.434 [2024-07-15 15:53:12.174568] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:45.434 [2024-07-15 15:53:12.174578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.174590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.174700] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:45.434 [2024-07-15 15:53:12.174708] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.174716] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:45.434 [2024-07-15 15:53:12.175553] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:45.434 [2024-07-15 15:53:12.176558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:45.434 [2024-07-15 15:53:12.177565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:45.434 [2024-07-15 15:53:12.178556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.434 [2024-07-15 15:53:12.178682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:45.434 [2024-07-15 15:53:12.179577] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:45.434 [2024-07-15 15:53:12.179595] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:45.434 [2024-07-15 15:53:12.179605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.179629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:45.434 [2024-07-15 15:53:12.179643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.179670] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.434 [2024-07-15 15:53:12.179680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.434 [2024-07-15 15:53:12.179701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.434 [2024-07-15 15:53:12.179761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:45.434 [2024-07-15 15:53:12.179779] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:45.434 [2024-07-15 15:53:12.179791] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:45.434 [2024-07-15 15:53:12.179799] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:45.434 [2024-07-15 15:53:12.179807] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:45.434 [2024-07-15 15:53:12.179814] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:45.434 [2024-07-15 15:53:12.179826] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:45.434 [2024-07-15 15:53:12.179834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.179847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.179884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:45.434 [2024-07-15 15:53:12.179900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:45.434 [2024-07-15 15:53:12.179935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.434 [2024-07-15 15:53:12.179950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.434 [2024-07-15 15:53:12.179962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.434 [2024-07-15 15:53:12.179974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.434 [2024-07-15 15:53:12.179983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.179999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:45.434 [2024-07-15 15:53:12.180014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:45.434 [2024-07-15 15:53:12.180027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:45.434 [2024-07-15 15:53:12.180038] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:45.434 [2024-07-15 15:53:12.180048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180167] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:45.435 [2024-07-15 15:53:12.180223] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:45.435 [2024-07-15 15:53:12.180233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180294] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:45.435 [2024-07-15 15:53:12.180311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180338] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.435 [2024-07-15 15:53:12.180346] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.435 [2024-07-15 15:53:12.180355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180429] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.435 [2024-07-15 15:53:12.180438] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.435 [2024-07-15 15:53:12.180447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:11:45.435 [2024-07-15 15:53:12.180497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180533] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:45.435 [2024-07-15 15:53:12.180540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:45.435 [2024-07-15 15:53:12.180548] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:45.435 [2024-07-15 15:53:12.180576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180705] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:45.435 [2024-07-15 15:53:12.180716] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:45.435 [2024-07-15 15:53:12.180722] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:45.435 [2024-07-15 15:53:12.180727] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:45.435 [2024-07-15 15:53:12.180736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:45.435 [2024-07-15 15:53:12.180748] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:45.435 
[2024-07-15 15:53:12.180756] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:45.435 [2024-07-15 15:53:12.180764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180775] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:45.435 [2024-07-15 15:53:12.180783] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.435 [2024-07-15 15:53:12.180791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180803] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:45.435 [2024-07-15 15:53:12.180811] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:45.435 [2024-07-15 15:53:12.180820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:45.435 [2024-07-15 15:53:12.180831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:45.435 [2024-07-15 15:53:12.180909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:45.435 ===================================================== 00:11:45.435 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:45.435 ===================================================== 00:11:45.435 Controller Capabilities/Features 00:11:45.435 ================================ 00:11:45.435 Vendor ID: 4e58 00:11:45.435 Subsystem Vendor ID: 4e58 00:11:45.435 Serial Number: SPDK1 00:11:45.435 Model Number: SPDK bdev Controller 00:11:45.435 Firmware Version: 24.09 00:11:45.435 Recommended Arb Burst: 6 00:11:45.435 IEEE OUI Identifier: 8d 6b 50 00:11:45.435 Multi-path I/O 00:11:45.435 May have multiple subsystem ports: Yes 00:11:45.435 May have multiple controllers: Yes 00:11:45.435 Associated with SR-IOV VF: No 00:11:45.435 Max Data Transfer Size: 131072 00:11:45.435 Max Number of Namespaces: 32 00:11:45.435 Max Number of I/O Queues: 127 00:11:45.435 NVMe Specification Version (VS): 1.3 00:11:45.435 NVMe Specification Version (Identify): 1.3 00:11:45.435 Maximum Queue Entries: 256 00:11:45.435 Contiguous Queues Required: Yes 00:11:45.435 Arbitration Mechanisms Supported 00:11:45.435 Weighted Round Robin: Not Supported 00:11:45.435 Vendor Specific: Not Supported 00:11:45.435 Reset Timeout: 15000 ms 00:11:45.435 Doorbell Stride: 4 bytes 00:11:45.435 NVM Subsystem Reset: Not Supported 00:11:45.435 Command Sets Supported 00:11:45.435 NVM Command Set: Supported 00:11:45.435 Boot Partition: Not Supported 00:11:45.435 Memory Page Size Minimum: 4096 bytes 00:11:45.435 Memory Page Size Maximum: 4096 bytes 00:11:45.435 Persistent Memory Region: Not Supported 
00:11:45.435 Optional Asynchronous Events Supported 00:11:45.435 Namespace Attribute Notices: Supported 00:11:45.435 Firmware Activation Notices: Not Supported 00:11:45.435 ANA Change Notices: Not Supported 00:11:45.435 PLE Aggregate Log Change Notices: Not Supported 00:11:45.435 LBA Status Info Alert Notices: Not Supported 00:11:45.435 EGE Aggregate Log Change Notices: Not Supported 00:11:45.435 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.435 Zone Descriptor Change Notices: Not Supported 00:11:45.435 Discovery Log Change Notices: Not Supported 00:11:45.435 Controller Attributes 00:11:45.435 128-bit Host Identifier: Supported 00:11:45.435 Non-Operational Permissive Mode: Not Supported 00:11:45.435 NVM Sets: Not Supported 00:11:45.435 Read Recovery Levels: Not Supported 00:11:45.435 Endurance Groups: Not Supported 00:11:45.435 Predictable Latency Mode: Not Supported 00:11:45.435 Traffic Based Keep ALive: Not Supported 00:11:45.435 Namespace Granularity: Not Supported 00:11:45.435 SQ Associations: Not Supported 00:11:45.435 UUID List: Not Supported 00:11:45.435 Multi-Domain Subsystem: Not Supported 00:11:45.435 Fixed Capacity Management: Not Supported 00:11:45.435 Variable Capacity Management: Not Supported 00:11:45.435 Delete Endurance Group: Not Supported 00:11:45.435 Delete NVM Set: Not Supported 00:11:45.435 Extended LBA Formats Supported: Not Supported 00:11:45.435 Flexible Data Placement Supported: Not Supported 00:11:45.435 00:11:45.435 Controller Memory Buffer Support 00:11:45.435 ================================ 00:11:45.435 Supported: No 00:11:45.435 00:11:45.435 Persistent Memory Region Support 00:11:45.435 ================================ 00:11:45.435 Supported: No 00:11:45.435 00:11:45.436 Admin Command Set Attributes 00:11:45.436 ============================ 00:11:45.436 Security Send/Receive: Not Supported 00:11:45.436 Format NVM: Not Supported 00:11:45.436 Firmware Activate/Download: Not Supported 00:11:45.436 Namespace Management: Not Supported 00:11:45.436 Device Self-Test: Not Supported 00:11:45.436 Directives: Not Supported 00:11:45.436 NVMe-MI: Not Supported 00:11:45.436 Virtualization Management: Not Supported 00:11:45.436 Doorbell Buffer Config: Not Supported 00:11:45.436 Get LBA Status Capability: Not Supported 00:11:45.436 Command & Feature Lockdown Capability: Not Supported 00:11:45.436 Abort Command Limit: 4 00:11:45.436 Async Event Request Limit: 4 00:11:45.436 Number of Firmware Slots: N/A 00:11:45.436 Firmware Slot 1 Read-Only: N/A 00:11:45.436 Firmware Activation Without Reset: N/A 00:11:45.436 Multiple Update Detection Support: N/A 00:11:45.436 Firmware Update Granularity: No Information Provided 00:11:45.436 Per-Namespace SMART Log: No 00:11:45.436 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.436 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:45.436 Command Effects Log Page: Supported 00:11:45.436 Get Log Page Extended Data: Supported 00:11:45.436 Telemetry Log Pages: Not Supported 00:11:45.436 Persistent Event Log Pages: Not Supported 00:11:45.436 Supported Log Pages Log Page: May Support 00:11:45.436 Commands Supported & Effects Log Page: Not Supported 00:11:45.436 Feature Identifiers & Effects Log Page:May Support 00:11:45.436 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.436 Data Area 4 for Telemetry Log: Not Supported 00:11:45.436 Error Log Page Entries Supported: 128 00:11:45.436 Keep Alive: Supported 00:11:45.436 Keep Alive Granularity: 10000 ms 00:11:45.436 00:11:45.436 NVM Command Set Attributes 
00:11:45.436 ========================== 00:11:45.436 Submission Queue Entry Size 00:11:45.436 Max: 64 00:11:45.436 Min: 64 00:11:45.436 Completion Queue Entry Size 00:11:45.436 Max: 16 00:11:45.436 Min: 16 00:11:45.436 Number of Namespaces: 32 00:11:45.436 Compare Command: Supported 00:11:45.436 Write Uncorrectable Command: Not Supported 00:11:45.436 Dataset Management Command: Supported 00:11:45.436 Write Zeroes Command: Supported 00:11:45.436 Set Features Save Field: Not Supported 00:11:45.436 Reservations: Not Supported 00:11:45.436 Timestamp: Not Supported 00:11:45.436 Copy: Supported 00:11:45.436 Volatile Write Cache: Present 00:11:45.436 Atomic Write Unit (Normal): 1 00:11:45.436 Atomic Write Unit (PFail): 1 00:11:45.436 Atomic Compare & Write Unit: 1 00:11:45.436 Fused Compare & Write: Supported 00:11:45.436 Scatter-Gather List 00:11:45.436 SGL Command Set: Supported (Dword aligned) 00:11:45.436 SGL Keyed: Not Supported 00:11:45.436 SGL Bit Bucket Descriptor: Not Supported 00:11:45.436 SGL Metadata Pointer: Not Supported 00:11:45.436 Oversized SGL: Not Supported 00:11:45.436 SGL Metadata Address: Not Supported 00:11:45.436 SGL Offset: Not Supported 00:11:45.436 Transport SGL Data Block: Not Supported 00:11:45.436 Replay Protected Memory Block: Not Supported 00:11:45.436 00:11:45.436 Firmware Slot Information 00:11:45.436 ========================= 00:11:45.436 Active slot: 1 00:11:45.436 Slot 1 Firmware Revision: 24.09 00:11:45.436 00:11:45.436 00:11:45.436 Commands Supported and Effects 00:11:45.436 ============================== 00:11:45.436 Admin Commands 00:11:45.436 -------------- 00:11:45.436 Get Log Page (02h): Supported 00:11:45.436 Identify (06h): Supported 00:11:45.436 Abort (08h): Supported 00:11:45.436 Set Features (09h): Supported 00:11:45.436 Get Features (0Ah): Supported 00:11:45.436 Asynchronous Event Request (0Ch): Supported 00:11:45.436 Keep Alive (18h): Supported 00:11:45.436 I/O Commands 00:11:45.436 ------------ 00:11:45.436 Flush (00h): Supported LBA-Change 00:11:45.436 Write (01h): Supported LBA-Change 00:11:45.436 Read (02h): Supported 00:11:45.436 Compare (05h): Supported 00:11:45.436 Write Zeroes (08h): Supported LBA-Change 00:11:45.436 Dataset Management (09h): Supported LBA-Change 00:11:45.436 Copy (19h): Supported LBA-Change 00:11:45.436 00:11:45.436 Error Log 00:11:45.436 ========= 00:11:45.436 00:11:45.436 Arbitration 00:11:45.436 =========== 00:11:45.436 Arbitration Burst: 1 00:11:45.436 00:11:45.436 Power Management 00:11:45.436 ================ 00:11:45.436 Number of Power States: 1 00:11:45.436 Current Power State: Power State #0 00:11:45.436 Power State #0: 00:11:45.436 Max Power: 0.00 W 00:11:45.436 Non-Operational State: Operational 00:11:45.436 Entry Latency: Not Reported 00:11:45.436 Exit Latency: Not Reported 00:11:45.436 Relative Read Throughput: 0 00:11:45.436 Relative Read Latency: 0 00:11:45.436 Relative Write Throughput: 0 00:11:45.436 Relative Write Latency: 0 00:11:45.436 Idle Power: Not Reported 00:11:45.436 Active Power: Not Reported 00:11:45.436 Non-Operational Permissive Mode: Not Supported 00:11:45.436 00:11:45.436 Health Information 00:11:45.436 ================== 00:11:45.436 Critical Warnings: 00:11:45.436 Available Spare Space: OK 00:11:45.436 Temperature: OK 00:11:45.436 Device Reliability: OK 00:11:45.436 Read Only: No 00:11:45.436 Volatile Memory Backup: OK 00:11:45.436 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:45.436 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:45.436 Available Spare: 0% 00:11:45.436 
Available Spare Threshold: 0% 00:11:45.436 [2024-07-15 15:53:12.181032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:45.436 [2024-07-15 15:53:12.181049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:45.436 [2024-07-15 15:53:12.181094] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:45.436 [2024-07-15 15:53:12.181113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.436 [2024-07-15 15:53:12.181125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.436 [2024-07-15 15:53:12.181139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.436 [2024-07-15 15:53:12.181150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.436 [2024-07-15 15:53:12.184888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:45.436 [2024-07-15 15:53:12.184912] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:45.436 [2024-07-15 15:53:12.185600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.436 [2024-07-15 15:53:12.185692] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:45.436 [2024-07-15 15:53:12.185708] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:45.436 [2024-07-15 15:53:12.186607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:45.436 [2024-07-15 15:53:12.186632] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:45.436 [2024-07-15 15:53:12.186692] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:45.436 [2024-07-15 15:53:12.188649] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:45.436 Life Percentage Used: 0% 00:11:45.436 Data Units Read: 0 00:11:45.436 Data Units Written: 0 00:11:45.436 Host Read Commands: 0 00:11:45.436 Host Write Commands: 0 00:11:45.436 Controller Busy Time: 0 minutes 00:11:45.436 Power Cycles: 0 00:11:45.436 Power On Hours: 0 hours 00:11:45.436 Unsafe Shutdowns: 0 00:11:45.436 Unrecoverable Media Errors: 0 00:11:45.436 Lifetime Error Log Entries: 0 00:11:45.436 Warning Temperature Time: 0 minutes 00:11:45.436 Critical Temperature Time: 0 minutes 00:11:45.436 00:11:45.436 Number of Queues 00:11:45.436 ================ 00:11:45.436 Number of I/O Submission Queues: 127 00:11:45.436 Number of I/O Completion Queues: 127 00:11:45.436 00:11:45.436 Active Namespaces 00:11:45.436 ================= 00:11:45.436 Namespace ID:1 00:11:45.436 Error Recovery Timeout: Unlimited 00:11:45.436 Command
Set Identifier: NVM (00h) 00:11:45.436 Deallocate: Supported 00:11:45.436 Deallocated/Unwritten Error: Not Supported 00:11:45.436 Deallocated Read Value: Unknown 00:11:45.436 Deallocate in Write Zeroes: Not Supported 00:11:45.436 Deallocated Guard Field: 0xFFFF 00:11:45.436 Flush: Supported 00:11:45.436 Reservation: Supported 00:11:45.436 Namespace Sharing Capabilities: Multiple Controllers 00:11:45.436 Size (in LBAs): 131072 (0GiB) 00:11:45.436 Capacity (in LBAs): 131072 (0GiB) 00:11:45.436 Utilization (in LBAs): 131072 (0GiB) 00:11:45.436 NGUID: D9B14EB9A3B541ACB2C0A24C2371A929 00:11:45.436 UUID: d9b14eb9-a3b5-41ac-b2c0-a24c2371a929 00:11:45.436 Thin Provisioning: Not Supported 00:11:45.437 Per-NS Atomic Units: Yes 00:11:45.437 Atomic Boundary Size (Normal): 0 00:11:45.437 Atomic Boundary Size (PFail): 0 00:11:45.437 Atomic Boundary Offset: 0 00:11:45.437 Maximum Single Source Range Length: 65535 00:11:45.437 Maximum Copy Length: 65535 00:11:45.437 Maximum Source Range Count: 1 00:11:45.437 NGUID/EUI64 Never Reused: No 00:11:45.437 Namespace Write Protected: No 00:11:45.437 Number of LBA Formats: 1 00:11:45.437 Current LBA Format: LBA Format #00 00:11:45.437 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.437 00:11:45.437 15:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:45.437 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.695 [2024-07-15 15:53:12.417686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:50.954 Initializing NVMe Controllers 00:11:50.954 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:50.954 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:50.954 Initialization complete. Launching workers. 00:11:50.954 ======================================================== 00:11:50.954 Latency(us) 00:11:50.954 Device Information : IOPS MiB/s Average min max 00:11:50.954 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33585.20 131.19 3812.29 1185.51 9043.09 00:11:50.954 ======================================================== 00:11:50.954 Total : 33585.20 131.19 3812.29 1185.51 9043.09 00:11:50.954 00:11:50.954 [2024-07-15 15:53:17.439416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:50.954 15:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.954 [2024-07-15 15:53:17.680578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.227 Initializing NVMe Controllers 00:11:56.227 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:56.227 Initialization complete. Launching workers. 
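Note: the two spdk_nvme_perf invocations above (target/nvmf_vfio_user.sh@84 for reads, sh@85 for writes, whose results follow) drive I/O over the VFIOUSER transport rather than a kernel PCIe device, so the "controller" is the vfio-user socket under /var/run/vfio-user. A minimal sketch of rerunning the read case by hand from the same build tree; the flag glosses in the comments reflect common spdk_nvme_perf usage and are assumptions, not anything stated in this log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -r: transport ID of the vfio-user target (copied from the log above)
    # -q 128: queue depth, -o 4096: I/O size in bytes, -w read: workload
    # (the sh@85 run uses -w write), -t 5: run time in seconds, -c 0x2:
    # core mask selecting lcore 1; -s 256 (DPDK memory size, MB) and -g
    # (single-file memory segments) are assumed glosses - verify against
    # the tool's --help output for this build
    ./build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2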
00:11:56.227 ======================================================== 00:11:56.227 Latency(us) 00:11:56.227 Device Information : IOPS MiB/s Average min max 00:11:56.227 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15889.00 62.07 8064.33 4960.02 15970.59 00:11:56.227 ======================================================== 00:11:56.227 Total : 15889.00 62.07 8064.33 4960.02 15970.59 00:11:56.227 00:11:56.227 [2024-07-15 15:53:22.717123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.227 15:53:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:56.227 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.227 [2024-07-15 15:53:22.924184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.496 [2024-07-15 15:53:27.994245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.496 Initializing NVMe Controllers 00:12:01.496 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.496 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.496 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:01.496 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:01.496 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:01.496 Initialization complete. Launching workers. 00:12:01.496 Starting thread on core 2 00:12:01.496 Starting thread on core 3 00:12:01.496 Starting thread on core 1 00:12:01.496 15:53:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:01.496 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.496 [2024-07-15 15:53:28.310578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.786 [2024-07-15 15:53:31.379181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.786 Initializing NVMe Controllers 00:12:04.786 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.786 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.786 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:04.786 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:04.786 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:04.786 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:04.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:04.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:04.786 Initialization complete. Launching workers. 
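Note on reading the arbitration table that follows: each "SPDK bdev Controller (SPDK1 ) core N" row pairs a per-core I/O rate with the time needed to complete the configured 100000 I/Os (-n 100000 in the printed configuration), so "secs/100000 ios" is simply 100000 divided by the IO/s column; for core 0 below, 100000 / 5584.00 IO/s is roughly 17.91 s, matching the printed value.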
00:12:04.786 Starting thread on core 1 with urgent priority queue 00:12:04.786 Starting thread on core 2 with urgent priority queue 00:12:04.786 Starting thread on core 3 with urgent priority queue 00:12:04.786 Starting thread on core 0 with urgent priority queue 00:12:04.786 SPDK bdev Controller (SPDK1 ) core 0: 5584.00 IO/s 17.91 secs/100000 ios 00:12:04.786 SPDK bdev Controller (SPDK1 ) core 1: 5213.00 IO/s 19.18 secs/100000 ios 00:12:04.786 SPDK bdev Controller (SPDK1 ) core 2: 5992.00 IO/s 16.69 secs/100000 ios 00:12:04.786 SPDK bdev Controller (SPDK1 ) core 3: 6145.33 IO/s 16.27 secs/100000 ios 00:12:04.786 ======================================================== 00:12:04.786 00:12:04.786 15:53:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:04.786 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.786 [2024-07-15 15:53:31.669394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.786 Initializing NVMe Controllers 00:12:04.786 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.786 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.786 Namespace ID: 1 size: 0GB 00:12:04.786 Initialization complete. 00:12:04.786 INFO: using host memory buffer for IO 00:12:04.786 Hello world! 00:12:04.786 [2024-07-15 15:53:31.703020] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:05.046 15:53:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:05.046 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.305 [2024-07-15 15:53:31.994421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.237 Initializing NVMe Controllers 00:12:06.237 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.237 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.237 Initialization complete. Launching workers. 
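Note on units in the overhead output that follows: the "submit"/"complete" summary lines are in nanoseconds while the histogram buckets are labeled "Range in us"; the percentage column is cumulative (it climbs to 100.0000%) and the parenthesized figure is the per-bucket sample count. The two views agree on the tail: the worst submit time of 4017511.1 ns is about 4017.5 us and lands in the 3980.705 - 4029.250 us buckets at the bottom of the submit histogram, even though the average stays near 10349.4 ns (about 10.3 us); likewise the worst complete time of 5011308.9 ns matches the final 5000.154 - 5024.427 us bucket.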
00:12:06.237 submit (in ns) avg, min, max = 10349.4, 3502.2, 4017511.1 00:12:06.237 complete (in ns) avg, min, max = 26553.2, 2054.4, 5011308.9 00:12:06.237 00:12:06.237 Submit histogram 00:12:06.237 ================ 00:12:06.237 Range in us Cumulative Count 00:12:06.237 3.484 - 3.508: 0.0307% ( 4) 00:12:06.237 3.508 - 3.532: 0.4141% ( 50) 00:12:06.237 3.532 - 3.556: 1.4801% ( 139) 00:12:06.237 3.556 - 3.579: 4.1641% ( 350) 00:12:06.237 3.579 - 3.603: 9.0644% ( 639) 00:12:06.237 3.603 - 3.627: 16.2347% ( 935) 00:12:06.237 3.627 - 3.650: 24.1564% ( 1033) 00:12:06.237 3.650 - 3.674: 32.4923% ( 1087) 00:12:06.237 3.674 - 3.698: 40.0690% ( 988) 00:12:06.237 3.698 - 3.721: 47.9218% ( 1024) 00:12:06.237 3.721 - 3.745: 53.4739% ( 724) 00:12:06.237 3.745 - 3.769: 57.8298% ( 568) 00:12:06.237 3.769 - 3.793: 61.4340% ( 470) 00:12:06.237 3.793 - 3.816: 65.0844% ( 476) 00:12:06.237 3.816 - 3.840: 68.7730% ( 481) 00:12:06.237 3.840 - 3.864: 72.8834% ( 536) 00:12:06.237 3.864 - 3.887: 76.6948% ( 497) 00:12:06.237 3.887 - 3.911: 80.1840% ( 455) 00:12:06.237 3.911 - 3.935: 83.5583% ( 440) 00:12:06.237 3.935 - 3.959: 86.1580% ( 339) 00:12:06.237 3.959 - 3.982: 87.9601% ( 235) 00:12:06.237 3.982 - 4.006: 89.3252% ( 178) 00:12:06.237 4.006 - 4.030: 90.6825% ( 177) 00:12:06.237 4.030 - 4.053: 91.9248% ( 162) 00:12:06.237 4.053 - 4.077: 93.1595% ( 161) 00:12:06.237 4.077 - 4.101: 93.8880% ( 95) 00:12:06.237 4.101 - 4.124: 94.5552% ( 87) 00:12:06.237 4.124 - 4.148: 95.1227% ( 74) 00:12:06.237 4.148 - 4.172: 95.6212% ( 65) 00:12:06.237 4.172 - 4.196: 95.9279% ( 40) 00:12:06.237 4.196 - 4.219: 96.2193% ( 38) 00:12:06.237 4.219 - 4.243: 96.3344% ( 15) 00:12:06.237 4.243 - 4.267: 96.4801% ( 19) 00:12:06.238 4.267 - 4.290: 96.6334% ( 20) 00:12:06.238 4.290 - 4.314: 96.7331% ( 13) 00:12:06.238 4.314 - 4.338: 96.8328% ( 13) 00:12:06.238 4.338 - 4.361: 96.9709% ( 18) 00:12:06.238 4.361 - 4.385: 97.0782% ( 14) 00:12:06.238 4.385 - 4.409: 97.1626% ( 11) 00:12:06.238 4.409 - 4.433: 97.2163% ( 7) 00:12:06.238 4.433 - 4.456: 97.2239% ( 1) 00:12:06.238 4.456 - 4.480: 97.2623% ( 5) 00:12:06.238 4.480 - 4.504: 97.2776% ( 2) 00:12:06.238 4.504 - 4.527: 97.3006% ( 3) 00:12:06.238 4.527 - 4.551: 97.3236% ( 3) 00:12:06.238 4.551 - 4.575: 97.3313% ( 1) 00:12:06.238 4.575 - 4.599: 97.3466% ( 2) 00:12:06.238 4.599 - 4.622: 97.3620% ( 2) 00:12:06.238 4.646 - 4.670: 97.3773% ( 2) 00:12:06.238 4.693 - 4.717: 97.4233% ( 6) 00:12:06.238 4.717 - 4.741: 97.4310% ( 1) 00:12:06.238 4.741 - 4.764: 97.4847% ( 7) 00:12:06.238 4.764 - 4.788: 97.5000% ( 2) 00:12:06.238 4.788 - 4.812: 97.5537% ( 7) 00:12:06.238 4.812 - 4.836: 97.5613% ( 1) 00:12:06.238 4.836 - 4.859: 97.5844% ( 3) 00:12:06.238 4.859 - 4.883: 97.6227% ( 5) 00:12:06.238 4.883 - 4.907: 97.6457% ( 3) 00:12:06.238 4.907 - 4.930: 97.6840% ( 5) 00:12:06.238 4.930 - 4.954: 97.6994% ( 2) 00:12:06.238 4.954 - 4.978: 97.7531% ( 7) 00:12:06.238 4.978 - 5.001: 97.7991% ( 6) 00:12:06.238 5.001 - 5.025: 97.8221% ( 3) 00:12:06.238 5.025 - 5.049: 97.8758% ( 7) 00:12:06.238 5.049 - 5.073: 97.8911% ( 2) 00:12:06.238 5.073 - 5.096: 97.9294% ( 5) 00:12:06.238 5.096 - 5.120: 97.9371% ( 1) 00:12:06.238 5.120 - 5.144: 97.9525% ( 2) 00:12:06.238 5.144 - 5.167: 97.9755% ( 3) 00:12:06.238 5.167 - 5.191: 97.9985% ( 3) 00:12:06.238 5.191 - 5.215: 98.0215% ( 3) 00:12:06.238 5.215 - 5.239: 98.0291% ( 1) 00:12:06.238 5.286 - 5.310: 98.0368% ( 1) 00:12:06.238 5.310 - 5.333: 98.0521% ( 2) 00:12:06.238 5.333 - 5.357: 98.0598% ( 1) 00:12:06.238 5.357 - 5.381: 98.0675% ( 1) 00:12:06.238 5.428 - 5.452: 98.0828% ( 2) 
00:12:06.238 5.499 - 5.523: 98.0905% ( 1) 00:12:06.238 5.855 - 5.879: 98.0982% ( 1) 00:12:06.238 5.879 - 5.902: 98.1058% ( 1) 00:12:06.238 5.902 - 5.926: 98.1135% ( 1) 00:12:06.238 6.400 - 6.447: 98.1212% ( 1) 00:12:06.238 6.495 - 6.542: 98.1288% ( 1) 00:12:06.238 6.542 - 6.590: 98.1365% ( 1) 00:12:06.238 6.590 - 6.637: 98.1442% ( 1) 00:12:06.238 6.637 - 6.684: 98.1518% ( 1) 00:12:06.238 6.874 - 6.921: 98.1672% ( 2) 00:12:06.238 6.921 - 6.969: 98.1748% ( 1) 00:12:06.238 6.969 - 7.016: 98.1825% ( 1) 00:12:06.238 7.064 - 7.111: 98.1979% ( 2) 00:12:06.238 7.159 - 7.206: 98.2209% ( 3) 00:12:06.238 7.301 - 7.348: 98.2285% ( 1) 00:12:06.238 7.348 - 7.396: 98.2515% ( 3) 00:12:06.238 7.396 - 7.443: 98.2592% ( 1) 00:12:06.238 7.490 - 7.538: 98.2669% ( 1) 00:12:06.238 7.538 - 7.585: 98.2822% ( 2) 00:12:06.238 7.680 - 7.727: 98.2975% ( 2) 00:12:06.238 7.775 - 7.822: 98.3052% ( 1) 00:12:06.238 7.822 - 7.870: 98.3282% ( 3) 00:12:06.238 7.870 - 7.917: 98.3359% ( 1) 00:12:06.238 7.917 - 7.964: 98.3436% ( 1) 00:12:06.238 7.964 - 8.012: 98.3512% ( 1) 00:12:06.238 8.012 - 8.059: 98.3666% ( 2) 00:12:06.238 8.059 - 8.107: 98.3819% ( 2) 00:12:06.238 8.107 - 8.154: 98.4049% ( 3) 00:12:06.238 8.249 - 8.296: 98.4126% ( 1) 00:12:06.238 8.296 - 8.344: 98.4202% ( 1) 00:12:06.238 8.344 - 8.391: 98.4279% ( 1) 00:12:06.238 8.628 - 8.676: 98.4356% ( 1) 00:12:06.238 8.676 - 8.723: 98.4433% ( 1) 00:12:06.238 8.723 - 8.770: 98.4509% ( 1) 00:12:06.238 8.770 - 8.818: 98.4586% ( 1) 00:12:06.238 9.007 - 9.055: 98.4663% ( 1) 00:12:06.238 9.434 - 9.481: 98.4893% ( 3) 00:12:06.238 9.529 - 9.576: 98.4969% ( 1) 00:12:06.238 9.671 - 9.719: 98.5046% ( 1) 00:12:06.238 9.719 - 9.766: 98.5123% ( 1) 00:12:06.238 9.861 - 9.908: 98.5276% ( 2) 00:12:06.238 9.908 - 9.956: 98.5429% ( 2) 00:12:06.238 10.050 - 10.098: 98.5506% ( 1) 00:12:06.238 10.335 - 10.382: 98.5583% ( 1) 00:12:06.238 10.430 - 10.477: 98.5660% ( 1) 00:12:06.238 10.619 - 10.667: 98.5736% ( 1) 00:12:06.238 10.809 - 10.856: 98.5813% ( 1) 00:12:06.238 10.904 - 10.951: 98.5966% ( 2) 00:12:06.238 11.046 - 11.093: 98.6120% ( 2) 00:12:06.238 11.093 - 11.141: 98.6196% ( 1) 00:12:06.238 11.473 - 11.520: 98.6273% ( 1) 00:12:06.238 11.710 - 11.757: 98.6350% ( 1) 00:12:06.238 11.852 - 11.899: 98.6503% ( 2) 00:12:06.238 11.947 - 11.994: 98.6580% ( 1) 00:12:06.238 12.089 - 12.136: 98.6656% ( 1) 00:12:06.238 12.136 - 12.231: 98.6733% ( 1) 00:12:06.238 12.231 - 12.326: 98.6810% ( 1) 00:12:06.238 12.705 - 12.800: 98.6963% ( 2) 00:12:06.238 12.895 - 12.990: 98.7040% ( 1) 00:12:06.238 13.369 - 13.464: 98.7117% ( 1) 00:12:06.238 13.559 - 13.653: 98.7270% ( 2) 00:12:06.238 13.748 - 13.843: 98.7423% ( 2) 00:12:06.238 13.843 - 13.938: 98.7500% ( 1) 00:12:06.238 14.127 - 14.222: 98.7577% ( 1) 00:12:06.238 14.412 - 14.507: 98.7653% ( 1) 00:12:06.238 14.601 - 14.696: 98.7730% ( 1) 00:12:06.238 14.886 - 14.981: 98.7807% ( 1) 00:12:06.238 14.981 - 15.076: 98.7883% ( 1) 00:12:06.238 15.170 - 15.265: 98.7960% ( 1) 00:12:06.238 15.644 - 15.739: 98.8037% ( 1) 00:12:06.238 17.067 - 17.161: 98.8113% ( 1) 00:12:06.238 17.161 - 17.256: 98.8190% ( 1) 00:12:06.238 17.351 - 17.446: 98.8344% ( 2) 00:12:06.238 17.446 - 17.541: 98.8497% ( 2) 00:12:06.238 17.541 - 17.636: 98.8880% ( 5) 00:12:06.238 17.636 - 17.730: 98.8957% ( 1) 00:12:06.238 17.730 - 17.825: 98.9264% ( 4) 00:12:06.238 17.825 - 17.920: 99.0107% ( 11) 00:12:06.238 17.920 - 18.015: 99.0798% ( 9) 00:12:06.238 18.015 - 18.110: 99.1334% ( 7) 00:12:06.238 18.110 - 18.204: 99.1718% ( 5) 00:12:06.238 18.204 - 18.299: 99.2485% ( 10) 00:12:06.238 18.299 - 
18.394: 99.3021% ( 7) 00:12:06.238 18.394 - 18.489: 99.4325% ( 17) 00:12:06.238 18.489 - 18.584: 99.4939% ( 8) 00:12:06.238 18.584 - 18.679: 99.5936% ( 13) 00:12:06.238 18.679 - 18.773: 99.6319% ( 5) 00:12:06.238 18.773 - 18.868: 99.6549% ( 3) 00:12:06.238 18.868 - 18.963: 99.6933% ( 5) 00:12:06.238 18.963 - 19.058: 99.7239% ( 4) 00:12:06.238 19.058 - 19.153: 99.7393% ( 2) 00:12:06.238 19.153 - 19.247: 99.7546% ( 2) 00:12:06.238 19.342 - 19.437: 99.7699% ( 2) 00:12:06.238 19.437 - 19.532: 99.7853% ( 2) 00:12:06.238 19.532 - 19.627: 99.8006% ( 2) 00:12:06.238 19.627 - 19.721: 99.8083% ( 1) 00:12:06.238 21.902 - 21.997: 99.8160% ( 1) 00:12:06.238 22.566 - 22.661: 99.8236% ( 1) 00:12:06.238 25.979 - 26.169: 99.8313% ( 1) 00:12:06.238 27.496 - 27.686: 99.8390% ( 1) 00:12:06.238 3106.892 - 3131.164: 99.8466% ( 1) 00:12:06.238 3980.705 - 4004.978: 99.9387% ( 12) 00:12:06.238 4004.978 - 4029.250: 100.0000% ( 8) 00:12:06.238 00:12:06.238 Complete histogram 00:12:06.238 ================== 00:12:06.238 Range in us Cumulative Count 00:12:06.238 2.050 - 2.062: 1.7561% ( 229) 00:12:06.238 2.062 - 2.074: 36.5951% ( 4543) 00:12:06.238 2.074 - 2.086: 42.4540% ( 764) 00:12:06.238 2.086 - 2.098: 47.4770% ( 655) 00:12:06.238 2.098 - 2.110: 58.8574% ( 1484) 00:12:06.238 2.110 - 2.121: 61.0429% ( 285) 00:12:06.238 2.121 - 2.133: 65.9126% ( 635) 00:12:06.238 2.133 - 2.145: 74.1181% ( 1070) 00:12:06.238 2.145 - 2.157: 75.0844% ( 126) 00:12:06.238 2.157 - 2.169: 78.5276% ( 449) 00:12:06.238 2.169 - 2.181: 81.7178% ( 416) 00:12:06.238 2.181 - 2.193: 82.5077% ( 103) 00:12:06.238 2.193 - 2.204: 84.4479% ( 253) 00:12:06.238 2.204 - 2.216: 87.8144% ( 439) 00:12:06.238 2.216 - 2.228: 89.7239% ( 249) 00:12:06.238 2.228 - 2.240: 91.6104% ( 246) 00:12:06.238 2.240 - 2.252: 93.0368% ( 186) 00:12:06.238 2.252 - 2.264: 93.5199% ( 63) 00:12:06.238 2.264 - 2.276: 93.8574% ( 44) 00:12:06.238 2.276 - 2.287: 94.0798% ( 29) 00:12:06.238 2.287 - 2.299: 94.7546% ( 88) 00:12:06.238 2.299 - 2.311: 95.1917% ( 57) 00:12:06.238 2.311 - 2.323: 95.3451% ( 20) 00:12:06.238 2.323 - 2.335: 95.4908% ( 19) 00:12:06.238 2.335 - 2.347: 95.6365% ( 19) 00:12:06.238 2.347 - 2.359: 96.0046% ( 48) 00:12:06.238 2.359 - 2.370: 96.4724% ( 61) 00:12:06.238 2.370 - 2.382: 97.0015% ( 69) 00:12:06.238 2.382 - 2.394: 97.5383% ( 70) 00:12:06.238 2.394 - 2.406: 97.8451% ( 40) 00:12:06.238 2.406 - 2.418: 97.9831% ( 18) 00:12:06.238 2.418 - 2.430: 98.0905% ( 14) 00:12:06.238 2.430 - 2.441: 98.2132% ( 16) 00:12:06.238 2.441 - 2.453: 98.2515% ( 5) 00:12:06.238 2.453 - 2.465: 98.3052% ( 7) 00:12:06.238 2.465 - 2.477: 98.3666% ( 8) 00:12:06.238 2.477 - 2.489: 98.4049% ( 5) 00:12:06.238 2.489 - 2.501: 98.4279% ( 3) 00:12:06.238 2.501 - 2.513: 98.4433% ( 2) 00:12:06.238 2.524 - 2.536: 98.4586% ( 2) 00:12:06.238 2.536 - 2.548: 98.4739% ( 2) 00:12:06.238 2.548 - 2.560: 98.4893% ( 2) 00:12:06.238 2.560 - 2.572: 98.5046% ( 2) 00:12:06.238 2.572 - 2.584: 98.5276% ( 3) 00:12:06.238 2.584 - 2.596: 98.5353% ( 1) 00:12:06.238 2.596 - 2.607: 98.5506% ( 2) 00:12:06.238 2.607 - 2.619: 98.5660% ( 2) 00:12:06.238 2.631 - 2.643: 98.5736% ( 1) 00:12:06.238 2.655 - 2.667: 98.5813% ( 1) 00:12:06.238 2.738 - 2.750: 98.5890% ( 1) 00:12:06.238 2.761 - 2.773: 98.5966% ( 1) 00:12:06.238 2.916 - 2.927: 98.6043% ( 1) 00:12:06.238 2.999 - 3.010: 98.6120% ( 1) 00:12:06.238 3.153 - 3.176: 98.6273% ( 2) 00:12:06.238 3.200 - 3.224: 98.6426% ( 2) 00:12:06.238 3.271 - 3.295: 98.6503% ( 1) 00:12:06.238 3.295 - 3.319: 98.6580% ( 1) 00:12:06.238 3.342 - 3.366: 98.6656% ( 1) 00:12:06.238 3.366 - 3.390: 
98.6733% ( 1) 00:12:06.238 3.390 - 3.413: 98.6810% ( 1) 00:12:06.238 3.532 - 3.556: 98.7040% ( 3) 00:12:06.238 3.556 - 3.579: 98.7117% ( 1) 00:12:06.238 3.627 - 3.650: 98.7193% ( 1) 00:12:06.238 3.650 - 3.674: 98.7270% ( 1) 00:12:06.238 3.674 - 3.698: 98.7347% ( 1) 00:12:06.238 3.721 - 3.745: 98.7423% ( 1) 00:12:06.238 3.793 - 3.816: 98.7577% ( 2) 00:12:06.238 5.001 - 5.025: 98.7653% ( 1) 00:12:06.238 5.025 - 5.049: 98.7730% ( 1) 00:12:06.238 5.262 - 5.286: 98.7807% ( 1) 00:12:06.238 5.286 - 5.310: 98.7883% ( 1) 00:12:06.238 [2024-07-15 15:53:33.014632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.238 5.428 - 5.452: 98.7960% ( 1) 00:12:06.238 5.499 - 5.523: 98.8037% ( 1) 00:12:06.238 5.641 - 5.665: 98.8113% ( 1) 00:12:06.238 5.736 - 5.760: 98.8190% ( 1) 00:12:06.238 5.760 - 5.784: 98.8344% ( 2) 00:12:06.238 5.807 - 5.831: 98.8420% ( 1) 00:12:06.238 5.902 - 5.926: 98.8497% ( 1) 00:12:06.238 5.926 - 5.950: 98.8574% ( 1) 00:12:06.238 5.997 - 6.021: 98.8650% ( 1) 00:12:06.238 6.163 - 6.210: 98.8727% ( 1) 00:12:06.238 6.258 - 6.305: 98.8804% ( 1) 00:12:06.238 6.353 - 6.400: 98.8957% ( 2) 00:12:06.238 6.447 - 6.495: 98.9110% ( 2) 00:12:06.238 6.542 - 6.590: 98.9187% ( 1) 00:12:06.238 6.590 - 6.637: 98.9264% ( 1) 00:12:06.238 6.637 - 6.684: 98.9340% ( 1) 00:12:06.238 6.969 - 7.016: 98.9417% ( 1) 00:12:06.238 8.676 - 8.723: 98.9494% ( 1) 00:12:06.238 15.550 - 15.644: 98.9647% ( 2) 00:12:06.238 15.644 - 15.739: 98.9954% ( 4) 00:12:06.238 15.739 - 15.834: 99.0031% ( 1) 00:12:06.238 15.929 - 16.024: 99.0184% ( 2) 00:12:06.238 16.024 - 16.119: 99.0337% ( 2) 00:12:06.238 16.119 - 16.213: 99.0491% ( 2) 00:12:06.238 16.213 - 16.308: 99.0951% ( 6) 00:12:06.238 16.308 - 16.403: 99.1104% ( 2) 00:12:06.238 16.403 - 16.498: 99.1488% ( 5) 00:12:06.238 16.498 - 16.593: 99.1718% ( 3) 00:12:06.238 16.593 - 16.687: 99.2178% ( 6) 00:12:06.238 16.687 - 16.782: 99.2408% ( 3) 00:12:06.238 16.782 - 16.877: 99.2561% ( 2) 00:12:06.238 16.877 - 16.972: 99.2791% ( 3) 00:12:06.238 16.972 - 17.067: 99.2868% ( 1) 00:12:06.238 17.067 - 17.161: 99.2945% ( 1) 00:12:06.238 17.161 - 17.256: 99.3098% ( 2) 00:12:06.238 17.256 - 17.351: 99.3328% ( 3) 00:12:06.238 17.351 - 17.446: 99.3405% ( 1) 00:12:06.238 17.446 - 17.541: 99.3482% ( 1) 00:12:06.238 17.541 - 17.636: 99.3635% ( 2) 00:12:06.238 17.636 - 17.730: 99.3712% ( 1) 00:12:06.238 17.730 - 17.825: 99.3788% ( 1) 00:12:06.238 18.679 - 18.773: 99.3865% ( 1) 00:12:06.238 1626.264 - 1638.400: 99.3942% ( 1) 00:12:06.238 3009.801 - 3021.938: 99.4018% ( 1) 00:12:06.238 3859.342 - 3883.615: 99.4095% ( 1) 00:12:06.238 3980.705 - 4004.978: 99.8313% ( 55) 00:12:06.238 4004.978 - 4029.250: 99.9923% ( 21) 00:12:06.238 5000.154 - 5024.427: 100.0000% ( 1) 00:12:06.238 00:12:06.238 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:06.238 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:06.238 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:06.238 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:06.238 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:06.495 [ 00:12:06.495 { 00:12:06.495 "nqn":
"nqn.2014-08.org.nvmexpress.discovery", 00:12:06.495 "subtype": "Discovery", 00:12:06.495 "listen_addresses": [], 00:12:06.495 "allow_any_host": true, 00:12:06.495 "hosts": [] 00:12:06.495 }, 00:12:06.495 { 00:12:06.495 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:06.495 "subtype": "NVMe", 00:12:06.495 "listen_addresses": [ 00:12:06.495 { 00:12:06.495 "trtype": "VFIOUSER", 00:12:06.495 "adrfam": "IPv4", 00:12:06.495 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:06.495 "trsvcid": "0" 00:12:06.495 } 00:12:06.495 ], 00:12:06.495 "allow_any_host": true, 00:12:06.495 "hosts": [], 00:12:06.495 "serial_number": "SPDK1", 00:12:06.495 "model_number": "SPDK bdev Controller", 00:12:06.495 "max_namespaces": 32, 00:12:06.495 "min_cntlid": 1, 00:12:06.495 "max_cntlid": 65519, 00:12:06.495 "namespaces": [ 00:12:06.495 { 00:12:06.495 "nsid": 1, 00:12:06.495 "bdev_name": "Malloc1", 00:12:06.495 "name": "Malloc1", 00:12:06.495 "nguid": "D9B14EB9A3B541ACB2C0A24C2371A929", 00:12:06.495 "uuid": "d9b14eb9-a3b5-41ac-b2c0-a24c2371a929" 00:12:06.495 } 00:12:06.495 ] 00:12:06.495 }, 00:12:06.495 { 00:12:06.495 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:06.495 "subtype": "NVMe", 00:12:06.495 "listen_addresses": [ 00:12:06.495 { 00:12:06.495 "trtype": "VFIOUSER", 00:12:06.495 "adrfam": "IPv4", 00:12:06.495 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:06.495 "trsvcid": "0" 00:12:06.495 } 00:12:06.495 ], 00:12:06.495 "allow_any_host": true, 00:12:06.495 "hosts": [], 00:12:06.495 "serial_number": "SPDK2", 00:12:06.495 "model_number": "SPDK bdev Controller", 00:12:06.495 "max_namespaces": 32, 00:12:06.495 "min_cntlid": 1, 00:12:06.495 "max_cntlid": 65519, 00:12:06.495 "namespaces": [ 00:12:06.495 { 00:12:06.495 "nsid": 1, 00:12:06.495 "bdev_name": "Malloc2", 00:12:06.495 "name": "Malloc2", 00:12:06.495 "nguid": "C462BDAEF7A34E90A6490C8F6B6E0050", 00:12:06.495 "uuid": "c462bdae-f7a3-4e90-a649-0c8f6b6e0050" 00:12:06.495 } 00:12:06.495 ] 00:12:06.495 } 00:12:06.495 ] 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1102032 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:06.495 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:06.495 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.752 [2024-07-15 15:53:33.456337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.752 Malloc3 00:12:06.752 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:07.009 [2024-07-15 15:53:33.819004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.009 15:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:07.009 Asynchronous Event Request test 00:12:07.009 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.009 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.009 Registering asynchronous event callbacks... 00:12:07.009 Starting namespace attribute notice tests for all controllers... 00:12:07.009 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:07.009 aer_cb - Changed Namespace 00:12:07.009 Cleaning up... 00:12:07.266 [ 00:12:07.266 { 00:12:07.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:07.266 "subtype": "Discovery", 00:12:07.266 "listen_addresses": [], 00:12:07.266 "allow_any_host": true, 00:12:07.266 "hosts": [] 00:12:07.266 }, 00:12:07.266 { 00:12:07.266 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:07.266 "subtype": "NVMe", 00:12:07.266 "listen_addresses": [ 00:12:07.266 { 00:12:07.266 "trtype": "VFIOUSER", 00:12:07.266 "adrfam": "IPv4", 00:12:07.266 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:07.266 "trsvcid": "0" 00:12:07.266 } 00:12:07.266 ], 00:12:07.266 "allow_any_host": true, 00:12:07.266 "hosts": [], 00:12:07.266 "serial_number": "SPDK1", 00:12:07.266 "model_number": "SPDK bdev Controller", 00:12:07.266 "max_namespaces": 32, 00:12:07.266 "min_cntlid": 1, 00:12:07.266 "max_cntlid": 65519, 00:12:07.266 "namespaces": [ 00:12:07.266 { 00:12:07.266 "nsid": 1, 00:12:07.266 "bdev_name": "Malloc1", 00:12:07.266 "name": "Malloc1", 00:12:07.266 "nguid": "D9B14EB9A3B541ACB2C0A24C2371A929", 00:12:07.266 "uuid": "d9b14eb9-a3b5-41ac-b2c0-a24c2371a929" 00:12:07.266 }, 00:12:07.266 { 00:12:07.266 "nsid": 2, 00:12:07.266 "bdev_name": "Malloc3", 00:12:07.266 "name": "Malloc3", 00:12:07.266 "nguid": "0375881A7FC448F5BB9E5427EB61F74C", 00:12:07.266 "uuid": "0375881a-7fc4-48f5-bb9e-5427eb61f74c" 00:12:07.266 } 00:12:07.266 ] 00:12:07.266 }, 00:12:07.266 { 00:12:07.266 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:07.266 "subtype": "NVMe", 00:12:07.266 "listen_addresses": [ 00:12:07.266 { 00:12:07.266 "trtype": "VFIOUSER", 00:12:07.266 "adrfam": "IPv4", 00:12:07.266 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:07.266 "trsvcid": "0" 00:12:07.266 } 00:12:07.266 ], 00:12:07.266 "allow_any_host": true, 00:12:07.266 "hosts": [], 00:12:07.266 "serial_number": "SPDK2", 00:12:07.266 "model_number": "SPDK bdev Controller", 00:12:07.266 
"max_namespaces": 32, 00:12:07.266 "min_cntlid": 1, 00:12:07.266 "max_cntlid": 65519, 00:12:07.266 "namespaces": [ 00:12:07.266 { 00:12:07.266 "nsid": 1, 00:12:07.266 "bdev_name": "Malloc2", 00:12:07.266 "name": "Malloc2", 00:12:07.266 "nguid": "C462BDAEF7A34E90A6490C8F6B6E0050", 00:12:07.266 "uuid": "c462bdae-f7a3-4e90-a649-0c8f6b6e0050" 00:12:07.266 } 00:12:07.266 ] 00:12:07.266 } 00:12:07.266 ] 00:12:07.266 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1102032 00:12:07.266 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.266 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:07.266 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:07.266 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:07.266 [2024-07-15 15:53:34.137128] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:07.266 [2024-07-15 15:53:34.137189] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102167 ] 00:12:07.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.266 [2024-07-15 15:53:34.169013] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:07.266 [2024-07-15 15:53:34.178146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.266 [2024-07-15 15:53:34.178191] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f681b7da000 00:12:07.266 [2024-07-15 15:53:34.179145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.180149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.181174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.182168] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.183208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.184207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.185194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.186207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.266 [2024-07-15 15:53:34.187218] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.266 [2024-07-15 15:53:34.187239] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f681b7cf000 00:12:07.266 [2024-07-15 15:53:34.188352] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.525 [2024-07-15 15:53:34.203375] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:07.526 [2024-07-15 15:53:34.203411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:07.526 [2024-07-15 15:53:34.205488] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:07.526 [2024-07-15 15:53:34.205540] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:07.526 [2024-07-15 15:53:34.205623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:07.526 [2024-07-15 15:53:34.205648] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:07.526 [2024-07-15 15:53:34.205658] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:07.526 [2024-07-15 15:53:34.206490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:07.526 [2024-07-15 15:53:34.206511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:07.526 [2024-07-15 15:53:34.206524] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:07.526 [2024-07-15 15:53:34.210888] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:07.526 [2024-07-15 15:53:34.210933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:07.526 [2024-07-15 15:53:34.210950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.211528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:07.526 [2024-07-15 15:53:34.211549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.212540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:07.526 [2024-07-15 15:53:34.212561] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:07.526 [2024-07-15 15:53:34.212571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.212583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.212692] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:07.526 [2024-07-15 15:53:34.212700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.212709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:07.526 [2024-07-15 15:53:34.213556] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:07.526 [2024-07-15 15:53:34.214566] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:07.526 [2024-07-15 15:53:34.215571] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:07.526 [2024-07-15 15:53:34.216567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.526 [2024-07-15 15:53:34.216648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:07.526 [2024-07-15 15:53:34.217588] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:07.526 [2024-07-15 15:53:34.217608] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:07.526 [2024-07-15 15:53:34.217617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.217641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:07.526 [2024-07-15 15:53:34.217658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.217683] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.526 [2024-07-15 15:53:34.217692] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.526 [2024-07-15 15:53:34.217711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.221898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.221923] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:07.526 [2024-07-15 15:53:34.221937] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:07.526 [2024-07-15 15:53:34.221946] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:07.526 [2024-07-15 15:53:34.221954] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:07.526 [2024-07-15 15:53:34.221962] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:07.526 [2024-07-15 15:53:34.221970] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:07.526 [2024-07-15 15:53:34.221978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.221992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.222009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.229917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.526 [2024-07-15 15:53:34.229932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.526 [2024-07-15 15:53:34.229944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.526 [2024-07-15 15:53:34.229956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.526 [2024-07-15 15:53:34.229965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.229982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.229997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.237902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.237921] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:07.526 [2024-07-15 15:53:34.237930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.237942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.237952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.237966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.245887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.245978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.245997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.246011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:07.526 [2024-07-15 15:53:34.246020] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:07.526 [2024-07-15 15:53:34.246031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.253886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.253910] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:07.526 [2024-07-15 15:53:34.253926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.253942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.253955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.526 [2024-07-15 15:53:34.253963] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.526 [2024-07-15 15:53:34.253973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.261885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:07.526 [2024-07-15 15:53:34.261914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.261931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:07.526 [2024-07-15 15:53:34.261945] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.526 [2024-07-15 15:53:34.261954] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.526 [2024-07-15 15:53:34.261963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.526 [2024-07-15 15:53:34.269889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.269910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269978] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:07.527 [2024-07-15 15:53:34.269986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:07.527 [2024-07-15 15:53:34.269995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:07.527 [2024-07-15 15:53:34.270021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.277888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.277914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.285888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.285914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.293888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.293913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.301887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.301921] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:07.527 [2024-07-15 15:53:34.301933] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:07.527 [2024-07-15 15:53:34.301939] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:07.527 [2024-07-15 15:53:34.301945] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:07.527 [2024-07-15 15:53:34.301955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:07.527 [2024-07-15 15:53:34.301967] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:07.527 [2024-07-15 15:53:34.301975] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:07.527 [2024-07-15 15:53:34.301984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.301995] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:07.527 [2024-07-15 15:53:34.302002] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.527 [2024-07-15 15:53:34.302011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.302023] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:07.527 [2024-07-15 15:53:34.302031] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:07.527 [2024-07-15 15:53:34.302039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:07.527 [2024-07-15 15:53:34.309900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.309927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.309945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:07.527 [2024-07-15 15:53:34.309962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:07.527 ===================================================== 00:12:07.527 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.527 ===================================================== 00:12:07.527 Controller Capabilities/Features 00:12:07.527 ================================ 00:12:07.527 Vendor ID: 4e58 00:12:07.527 Subsystem Vendor ID: 4e58 00:12:07.527 Serial Number: SPDK2 00:12:07.527 Model Number: SPDK bdev Controller 00:12:07.527 Firmware Version: 24.09 00:12:07.527 Recommended Arb Burst: 6 00:12:07.527 IEEE OUI Identifier: 8d 6b 50 00:12:07.527 Multi-path I/O 00:12:07.527 May have multiple subsystem ports: Yes 00:12:07.527 May have multiple controllers: Yes 00:12:07.527 Associated with SR-IOV VF: No 00:12:07.527 Max Data Transfer Size: 131072 00:12:07.527 Max Number of Namespaces: 32 00:12:07.527 Max Number of I/O Queues: 127 00:12:07.527 NVMe Specification Version (VS): 1.3 00:12:07.527 NVMe Specification Version (Identify): 1.3 00:12:07.527 Maximum Queue Entries: 256 00:12:07.527 Contiguous Queues Required: Yes 00:12:07.527 Arbitration Mechanisms 
Supported 00:12:07.527 Weighted Round Robin: Not Supported 00:12:07.527 Vendor Specific: Not Supported 00:12:07.527 Reset Timeout: 15000 ms 00:12:07.527 Doorbell Stride: 4 bytes 00:12:07.527 NVM Subsystem Reset: Not Supported 00:12:07.527 Command Sets Supported 00:12:07.527 NVM Command Set: Supported 00:12:07.527 Boot Partition: Not Supported 00:12:07.527 Memory Page Size Minimum: 4096 bytes 00:12:07.527 Memory Page Size Maximum: 4096 bytes 00:12:07.527 Persistent Memory Region: Not Supported 00:12:07.527 Optional Asynchronous Events Supported 00:12:07.527 Namespace Attribute Notices: Supported 00:12:07.527 Firmware Activation Notices: Not Supported 00:12:07.527 ANA Change Notices: Not Supported 00:12:07.527 PLE Aggregate Log Change Notices: Not Supported 00:12:07.527 LBA Status Info Alert Notices: Not Supported 00:12:07.527 EGE Aggregate Log Change Notices: Not Supported 00:12:07.527 Normal NVM Subsystem Shutdown event: Not Supported 00:12:07.527 Zone Descriptor Change Notices: Not Supported 00:12:07.527 Discovery Log Change Notices: Not Supported 00:12:07.527 Controller Attributes 00:12:07.527 128-bit Host Identifier: Supported 00:12:07.527 Non-Operational Permissive Mode: Not Supported 00:12:07.527 NVM Sets: Not Supported 00:12:07.527 Read Recovery Levels: Not Supported 00:12:07.527 Endurance Groups: Not Supported 00:12:07.527 Predictable Latency Mode: Not Supported 00:12:07.527 Traffic Based Keep ALive: Not Supported 00:12:07.527 Namespace Granularity: Not Supported 00:12:07.527 SQ Associations: Not Supported 00:12:07.527 UUID List: Not Supported 00:12:07.527 Multi-Domain Subsystem: Not Supported 00:12:07.527 Fixed Capacity Management: Not Supported 00:12:07.527 Variable Capacity Management: Not Supported 00:12:07.527 Delete Endurance Group: Not Supported 00:12:07.527 Delete NVM Set: Not Supported 00:12:07.527 Extended LBA Formats Supported: Not Supported 00:12:07.527 Flexible Data Placement Supported: Not Supported 00:12:07.527 00:12:07.527 Controller Memory Buffer Support 00:12:07.527 ================================ 00:12:07.527 Supported: No 00:12:07.527 00:12:07.527 Persistent Memory Region Support 00:12:07.527 ================================ 00:12:07.527 Supported: No 00:12:07.527 00:12:07.527 Admin Command Set Attributes 00:12:07.527 ============================ 00:12:07.527 Security Send/Receive: Not Supported 00:12:07.527 Format NVM: Not Supported 00:12:07.527 Firmware Activate/Download: Not Supported 00:12:07.527 Namespace Management: Not Supported 00:12:07.527 Device Self-Test: Not Supported 00:12:07.527 Directives: Not Supported 00:12:07.527 NVMe-MI: Not Supported 00:12:07.527 Virtualization Management: Not Supported 00:12:07.527 Doorbell Buffer Config: Not Supported 00:12:07.527 Get LBA Status Capability: Not Supported 00:12:07.527 Command & Feature Lockdown Capability: Not Supported 00:12:07.527 Abort Command Limit: 4 00:12:07.527 Async Event Request Limit: 4 00:12:07.527 Number of Firmware Slots: N/A 00:12:07.527 Firmware Slot 1 Read-Only: N/A 00:12:07.528 Firmware Activation Without Reset: N/A 00:12:07.528 Multiple Update Detection Support: N/A 00:12:07.528 Firmware Update Granularity: No Information Provided 00:12:07.528 Per-Namespace SMART Log: No 00:12:07.528 Asymmetric Namespace Access Log Page: Not Supported 00:12:07.528 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:07.528 Command Effects Log Page: Supported 00:12:07.528 Get Log Page Extended Data: Supported 00:12:07.528 Telemetry Log Pages: Not Supported 00:12:07.528 Persistent Event Log Pages: Not Supported 
00:12:07.528 Supported Log Pages Log Page: May Support 00:12:07.528 Commands Supported & Effects Log Page: Not Supported 00:12:07.528 Feature Identifiers & Effects Log Page:May Support 00:12:07.528 NVMe-MI Commands & Effects Log Page: May Support 00:12:07.528 Data Area 4 for Telemetry Log: Not Supported 00:12:07.528 Error Log Page Entries Supported: 128 00:12:07.528 Keep Alive: Supported 00:12:07.528 Keep Alive Granularity: 10000 ms 00:12:07.528 00:12:07.528 NVM Command Set Attributes 00:12:07.528 ========================== 00:12:07.528 Submission Queue Entry Size 00:12:07.528 Max: 64 00:12:07.528 Min: 64 00:12:07.528 Completion Queue Entry Size 00:12:07.528 Max: 16 00:12:07.528 Min: 16 00:12:07.528 Number of Namespaces: 32 00:12:07.528 Compare Command: Supported 00:12:07.528 Write Uncorrectable Command: Not Supported 00:12:07.528 Dataset Management Command: Supported 00:12:07.528 Write Zeroes Command: Supported 00:12:07.528 Set Features Save Field: Not Supported 00:12:07.528 Reservations: Not Supported 00:12:07.528 Timestamp: Not Supported 00:12:07.528 Copy: Supported 00:12:07.528 Volatile Write Cache: Present 00:12:07.528 Atomic Write Unit (Normal): 1 00:12:07.528 Atomic Write Unit (PFail): 1 00:12:07.528 Atomic Compare & Write Unit: 1 00:12:07.528 Fused Compare & Write: Supported 00:12:07.528 Scatter-Gather List 00:12:07.528 SGL Command Set: Supported (Dword aligned) 00:12:07.528 SGL Keyed: Not Supported 00:12:07.528 SGL Bit Bucket Descriptor: Not Supported 00:12:07.528 SGL Metadata Pointer: Not Supported 00:12:07.528 Oversized SGL: Not Supported 00:12:07.528 SGL Metadata Address: Not Supported 00:12:07.528 SGL Offset: Not Supported 00:12:07.528 Transport SGL Data Block: Not Supported 00:12:07.528 Replay Protected Memory Block: Not Supported 00:12:07.528 00:12:07.528 Firmware Slot Information 00:12:07.528 ========================= 00:12:07.528 Active slot: 1 00:12:07.528 Slot 1 Firmware Revision: 24.09 00:12:07.528 00:12:07.528 00:12:07.528 Commands Supported and Effects 00:12:07.528 ============================== 00:12:07.528 Admin Commands 00:12:07.528 -------------- 00:12:07.528 Get Log Page (02h): Supported 00:12:07.528 Identify (06h): Supported 00:12:07.528 Abort (08h): Supported 00:12:07.528 Set Features (09h): Supported 00:12:07.528 Get Features (0Ah): Supported 00:12:07.528 Asynchronous Event Request (0Ch): Supported 00:12:07.528 Keep Alive (18h): Supported 00:12:07.528 I/O Commands 00:12:07.528 ------------ 00:12:07.528 Flush (00h): Supported LBA-Change 00:12:07.528 Write (01h): Supported LBA-Change 00:12:07.528 Read (02h): Supported 00:12:07.528 Compare (05h): Supported 00:12:07.528 Write Zeroes (08h): Supported LBA-Change 00:12:07.528 Dataset Management (09h): Supported LBA-Change 00:12:07.528 Copy (19h): Supported LBA-Change 00:12:07.528 00:12:07.528 Error Log 00:12:07.528 ========= 00:12:07.528 00:12:07.528 Arbitration 00:12:07.528 =========== 00:12:07.528 Arbitration Burst: 1 00:12:07.528 00:12:07.528 Power Management 00:12:07.528 ================ 00:12:07.528 Number of Power States: 1 00:12:07.528 Current Power State: Power State #0 00:12:07.528 Power State #0: 00:12:07.528 Max Power: 0.00 W 00:12:07.528 Non-Operational State: Operational 00:12:07.528 Entry Latency: Not Reported 00:12:07.528 Exit Latency: Not Reported 00:12:07.528 Relative Read Throughput: 0 00:12:07.528 Relative Read Latency: 0 00:12:07.528 Relative Write Throughput: 0 00:12:07.528 Relative Write Latency: 0 00:12:07.528 Idle Power: Not Reported 00:12:07.528 Active Power: Not Reported 00:12:07.528 
Non-Operational Permissive Mode: Not Supported 00:12:07.528 00:12:07.528 Health Information 00:12:07.528 ================== 00:12:07.528 Critical Warnings: 00:12:07.528 Available Spare Space: OK 00:12:07.528 Temperature: OK 00:12:07.528 Device Reliability: OK 00:12:07.528 Read Only: No 00:12:07.528 Volatile Memory Backup: OK 00:12:07.528 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:07.528 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.528 Available Spare: 0% 00:12:07.528 Available Spare Threshold: 0% 00:12:07.528 [2024-07-15 15:53:34.310075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:07.528 [2024-07-15 15:53:34.317889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:07.528 [2024-07-15 15:53:34.317939] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:07.528 [2024-07-15 15:53:34.317957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.528 [2024-07-15 15:53:34.317968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.528 [2024-07-15 15:53:34.317979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.528 [2024-07-15 15:53:34.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.528 [2024-07-15 15:53:34.318055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:07.528 [2024-07-15 15:53:34.318075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:07.528 [2024-07-15 15:53:34.319057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.528 [2024-07-15 15:53:34.319144] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:07.528 [2024-07-15 15:53:34.319160] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:07.528 [2024-07-15 15:53:34.320071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:07.528 [2024-07-15 15:53:34.320097] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:07.528 [2024-07-15 15:53:34.320151] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:07.528 [2024-07-15 15:53:34.322887] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.528 Life Percentage Used: 0% 00:12:07.528 Data Units Read: 0 00:12:07.528 Data Units Written: 0 00:12:07.528 Host Read Commands: 0 00:12:07.528 Host Write Commands: 0 00:12:07.528 Controller Busy Time: 0 minutes 00:12:07.528 Power Cycles: 0 00:12:07.528 Power On Hours: 0 hours 00:12:07.528 Unsafe Shutdowns: 0 00:12:07.528 Unrecoverable Media 
Errors: 0 00:12:07.528 Lifetime Error Log Entries: 0 00:12:07.528 Warning Temperature Time: 0 minutes 00:12:07.528 Critical Temperature Time: 0 minutes 00:12:07.528 00:12:07.528 Number of Queues 00:12:07.528 ================ 00:12:07.528 Number of I/O Submission Queues: 127 00:12:07.528 Number of I/O Completion Queues: 127 00:12:07.528 00:12:07.528 Active Namespaces 00:12:07.528 ================= 00:12:07.528 Namespace ID:1 00:12:07.528 Error Recovery Timeout: Unlimited 00:12:07.528 Command Set Identifier: NVM (00h) 00:12:07.528 Deallocate: Supported 00:12:07.528 Deallocated/Unwritten Error: Not Supported 00:12:07.528 Deallocated Read Value: Unknown 00:12:07.528 Deallocate in Write Zeroes: Not Supported 00:12:07.528 Deallocated Guard Field: 0xFFFF 00:12:07.528 Flush: Supported 00:12:07.528 Reservation: Supported 00:12:07.529 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.529 Size (in LBAs): 131072 (0GiB) 00:12:07.529 Capacity (in LBAs): 131072 (0GiB) 00:12:07.529 Utilization (in LBAs): 131072 (0GiB) 00:12:07.529 NGUID: C462BDAEF7A34E90A6490C8F6B6E0050 00:12:07.529 UUID: c462bdae-f7a3-4e90-a649-0c8f6b6e0050 00:12:07.529 Thin Provisioning: Not Supported 00:12:07.529 Per-NS Atomic Units: Yes 00:12:07.529 Atomic Boundary Size (Normal): 0 00:12:07.529 Atomic Boundary Size (PFail): 0 00:12:07.529 Atomic Boundary Offset: 0 00:12:07.529 Maximum Single Source Range Length: 65535 00:12:07.529 Maximum Copy Length: 65535 00:12:07.529 Maximum Source Range Count: 1 00:12:07.529 NGUID/EUI64 Never Reused: No 00:12:07.529 Namespace Write Protected: No 00:12:07.529 Number of LBA Formats: 1 00:12:07.529 Current LBA Format: LBA Format #00 00:12:07.529 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.529 00:12:07.529 15:53:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.529 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.802 [2024-07-15 15:53:34.562650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:13.079 Initializing NVMe Controllers 00:12:13.079 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:13.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:13.079 Initialization complete. Launching workers. 
00:12:13.079 ======================================================== 00:12:13.079 Latency(us) 00:12:13.079 Device Information : IOPS MiB/s Average min max 00:12:13.079 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33665.98 131.51 3801.47 1173.84 7381.03 00:12:13.079 ======================================================== 00:12:13.079 Total : 33665.98 131.51 3801.47 1173.84 7381.03 00:12:13.079 00:12:13.079 [2024-07-15 15:53:39.663243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:13.079 15:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:13.079 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.079 [2024-07-15 15:53:39.894885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.388 Initializing NVMe Controllers 00:12:18.388 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:18.388 Initialization complete. Launching workers. 00:12:18.388 ======================================================== 00:12:18.388 Latency(us) 00:12:18.388 Device Information : IOPS MiB/s Average min max 00:12:18.388 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29078.12 113.59 4400.75 1278.56 8215.05 00:12:18.388 ======================================================== 00:12:18.388 Total : 29078.12 113.59 4400.75 1278.56 8215.05 00:12:18.388 00:12:18.388 [2024-07-15 15:53:44.915390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.388 15:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:18.388 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.388 [2024-07-15 15:53:45.127319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.658 [2024-07-15 15:53:50.273043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.658 Initializing NVMe Controllers 00:12:23.658 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.658 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.658 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:23.658 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:23.658 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:23.658 Initialization complete. Launching workers. 
00:12:23.658 Starting thread on core 2 00:12:23.658 Starting thread on core 3 00:12:23.658 Starting thread on core 1 00:12:23.658 15:53:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:23.658 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.658 [2024-07-15 15:53:50.571381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:27.854 [2024-07-15 15:53:54.113792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:27.854 Initializing NVMe Controllers 00:12:27.854 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.854 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.854 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:27.855 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:27.855 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:27.855 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:27.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:27.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:27.855 Initialization complete. Launching workers. 00:12:27.855 Starting thread on core 1 with urgent priority queue 00:12:27.855 Starting thread on core 2 with urgent priority queue 00:12:27.855 Starting thread on core 3 with urgent priority queue 00:12:27.855 Starting thread on core 0 with urgent priority queue 00:12:27.855 SPDK bdev Controller (SPDK2 ) core 0: 3516.33 IO/s 28.44 secs/100000 ios 00:12:27.855 SPDK bdev Controller (SPDK2 ) core 1: 2645.67 IO/s 37.80 secs/100000 ios 00:12:27.855 SPDK bdev Controller (SPDK2 ) core 2: 3407.00 IO/s 29.35 secs/100000 ios 00:12:27.855 SPDK bdev Controller (SPDK2 ) core 3: 3521.33 IO/s 28.40 secs/100000 ios 00:12:27.855 ======================================================== 00:12:27.855 00:12:27.855 15:53:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:27.855 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.855 [2024-07-15 15:53:54.410868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:27.855 Initializing NVMe Controllers 00:12:27.855 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.855 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.855 Namespace ID: 1 size: 0GB 00:12:27.855 Initialization complete. 00:12:27.855 INFO: using host memory buffer for IO 00:12:27.855 Hello world! 
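All of the host-side tools exercised in this pass (spdk_nvme_perf, reconnect, arbitration, hello_world) reach the vfio-user controller through the same -r transport-ID string, so the runs above reduce to one invocation pattern. A minimal sketch in shell, using the workspace path and subsystem NQN from this run (both would differ in another workspace):

    # Transport ID shared by every tool below; trtype/traddr/subnqn exactly as in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2   # QD128 4KiB read pass
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2  # same workload, writes
    $SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE   # reconnect stress on 3 cores
    $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g                               # queue-arbitration test across 4 cores
    $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"                                    # single write/read sanity check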
00:12:27.855 [2024-07-15 15:53:54.422969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:27.855 15:53:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:27.855 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.855 [2024-07-15 15:53:54.701645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.234 Initializing NVMe Controllers 00:12:29.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.234 Initialization complete. Launching workers. 00:12:29.234 submit (in ns) avg, min, max = 6684.8, 3502.2, 4016423.3 00:12:29.234 complete (in ns) avg, min, max = 28602.2, 2060.0, 4024208.9 00:12:29.234 00:12:29.234 Submit histogram 00:12:29.234 ================ 00:12:29.234 Range in us Cumulative Count 00:12:29.234 3.484 - 3.508: 0.0153% ( 2) 00:12:29.234 3.508 - 3.532: 0.5599% ( 71) 00:12:29.234 3.532 - 3.556: 1.4803% ( 120) 00:12:29.234 3.556 - 3.579: 4.2568% ( 362) 00:12:29.234 3.579 - 3.603: 8.7283% ( 583) 00:12:29.234 3.603 - 3.627: 16.6820% ( 1037) 00:12:29.234 3.627 - 3.650: 25.4333% ( 1141) 00:12:29.234 3.650 - 3.674: 35.0054% ( 1248) 00:12:29.234 3.674 - 3.698: 43.3655% ( 1090) 00:12:29.234 3.698 - 3.721: 51.0048% ( 996) 00:12:29.234 3.721 - 3.745: 55.9979% ( 651) 00:12:29.234 3.745 - 3.769: 61.1137% ( 667) 00:12:29.234 3.769 - 3.793: 65.3091% ( 547) 00:12:29.234 3.793 - 3.816: 68.8679% ( 464) 00:12:29.234 3.816 - 3.840: 72.6799% ( 497) 00:12:29.234 3.840 - 3.864: 76.3844% ( 483) 00:12:29.234 3.864 - 3.887: 79.8052% ( 446) 00:12:29.234 3.887 - 3.911: 82.9345% ( 408) 00:12:29.234 3.911 - 3.935: 85.5883% ( 346) 00:12:29.234 3.935 - 3.959: 87.7282% ( 279) 00:12:29.234 3.959 - 3.982: 89.4309% ( 222) 00:12:29.234 3.982 - 4.006: 91.2947% ( 243) 00:12:29.234 4.006 - 4.030: 92.9130% ( 211) 00:12:29.234 4.030 - 4.053: 93.9561% ( 136) 00:12:29.234 4.053 - 4.077: 94.8228% ( 113) 00:12:29.234 4.077 - 4.101: 95.5285% ( 92) 00:12:29.234 4.101 - 4.124: 96.0040% ( 62) 00:12:29.234 4.124 - 4.148: 96.3031% ( 39) 00:12:29.234 4.148 - 4.172: 96.6406% ( 44) 00:12:29.234 4.172 - 4.196: 96.8400% ( 26) 00:12:29.234 4.196 - 4.219: 97.0011% ( 21) 00:12:29.234 4.219 - 4.243: 97.1008% ( 13) 00:12:29.234 4.243 - 4.267: 97.2388% ( 18) 00:12:29.234 4.267 - 4.290: 97.3232% ( 11) 00:12:29.234 4.290 - 4.314: 97.4152% ( 12) 00:12:29.234 4.314 - 4.338: 97.4843% ( 9) 00:12:29.234 4.338 - 4.361: 97.5303% ( 6) 00:12:29.234 4.361 - 4.385: 97.5456% ( 2) 00:12:29.234 4.385 - 4.409: 97.5610% ( 2) 00:12:29.234 4.409 - 4.433: 97.5763% ( 2) 00:12:29.234 4.433 - 4.456: 97.5917% ( 2) 00:12:29.234 4.480 - 4.504: 97.5993% ( 1) 00:12:29.234 4.504 - 4.527: 97.6147% ( 2) 00:12:29.234 4.551 - 4.575: 97.6223% ( 1) 00:12:29.234 4.575 - 4.599: 97.6300% ( 1) 00:12:29.234 4.599 - 4.622: 97.6453% ( 2) 00:12:29.234 4.646 - 4.670: 97.6530% ( 1) 00:12:29.234 4.693 - 4.717: 97.6760% ( 3) 00:12:29.234 4.717 - 4.741: 97.7067% ( 4) 00:12:29.234 4.741 - 4.764: 97.7374% ( 4) 00:12:29.234 4.764 - 4.788: 97.7451% ( 1) 00:12:29.234 4.788 - 4.812: 97.7527% ( 1) 00:12:29.234 4.812 - 4.836: 97.7911% ( 5) 00:12:29.234 4.836 - 4.859: 97.8218% ( 4) 00:12:29.234 4.859 - 4.883: 97.8524% ( 4) 00:12:29.234 4.883 - 4.907: 97.8754% ( 3) 00:12:29.234 4.907 - 
4.930: 97.9291% ( 7) 00:12:29.234 4.930 - 4.954: 97.9445% ( 2) 00:12:29.234 4.954 - 4.978: 97.9751% ( 4) 00:12:29.234 4.978 - 5.001: 97.9905% ( 2) 00:12:29.234 5.001 - 5.025: 98.0595% ( 9) 00:12:29.234 5.025 - 5.049: 98.0979% ( 5) 00:12:29.235 5.049 - 5.073: 98.1132% ( 2) 00:12:29.235 5.073 - 5.096: 98.1439% ( 4) 00:12:29.235 5.096 - 5.120: 98.1746% ( 4) 00:12:29.235 5.120 - 5.144: 98.1976% ( 3) 00:12:29.235 5.144 - 5.167: 98.2206% ( 3) 00:12:29.235 5.167 - 5.191: 98.2589% ( 5) 00:12:29.235 5.191 - 5.215: 98.3126% ( 7) 00:12:29.235 5.215 - 5.239: 98.3203% ( 1) 00:12:29.235 5.239 - 5.262: 98.3356% ( 2) 00:12:29.235 5.262 - 5.286: 98.3510% ( 2) 00:12:29.235 5.286 - 5.310: 98.3893% ( 5) 00:12:29.235 5.310 - 5.333: 98.4123% ( 3) 00:12:29.235 5.381 - 5.404: 98.4277% ( 2) 00:12:29.235 5.404 - 5.428: 98.4353% ( 1) 00:12:29.235 5.452 - 5.476: 98.4507% ( 2) 00:12:29.235 5.499 - 5.523: 98.4584% ( 1) 00:12:29.235 5.547 - 5.570: 98.4660% ( 1) 00:12:29.235 5.713 - 5.736: 98.4737% ( 1) 00:12:29.235 5.807 - 5.831: 98.4814% ( 1) 00:12:29.235 5.831 - 5.855: 98.4890% ( 1) 00:12:29.235 5.855 - 5.879: 98.4967% ( 1) 00:12:29.235 5.926 - 5.950: 98.5044% ( 1) 00:12:29.235 5.950 - 5.973: 98.5120% ( 1) 00:12:29.235 6.044 - 6.068: 98.5197% ( 1) 00:12:29.235 6.116 - 6.163: 98.5351% ( 2) 00:12:29.235 6.210 - 6.258: 98.5427% ( 1) 00:12:29.235 6.258 - 6.305: 98.5581% ( 2) 00:12:29.235 6.305 - 6.353: 98.5734% ( 2) 00:12:29.235 6.400 - 6.447: 98.5811% ( 1) 00:12:29.235 6.684 - 6.732: 98.5964% ( 2) 00:12:29.235 6.732 - 6.779: 98.6041% ( 1) 00:12:29.235 6.779 - 6.827: 98.6118% ( 1) 00:12:29.235 6.921 - 6.969: 98.6271% ( 2) 00:12:29.235 6.969 - 7.016: 98.6424% ( 2) 00:12:29.235 7.016 - 7.064: 98.6501% ( 1) 00:12:29.235 7.064 - 7.111: 98.6578% ( 1) 00:12:29.235 7.111 - 7.159: 98.6654% ( 1) 00:12:29.235 7.206 - 7.253: 98.6731% ( 1) 00:12:29.235 7.301 - 7.348: 98.6808% ( 1) 00:12:29.235 7.348 - 7.396: 98.6961% ( 2) 00:12:29.235 7.396 - 7.443: 98.7115% ( 2) 00:12:29.235 7.490 - 7.538: 98.7191% ( 1) 00:12:29.235 7.585 - 7.633: 98.7345% ( 2) 00:12:29.235 7.633 - 7.680: 98.7421% ( 1) 00:12:29.235 7.680 - 7.727: 98.7651% ( 3) 00:12:29.235 7.822 - 7.870: 98.7728% ( 1) 00:12:29.235 7.964 - 8.012: 98.7958% ( 3) 00:12:29.235 8.059 - 8.107: 98.8035% ( 1) 00:12:29.235 8.154 - 8.201: 98.8112% ( 1) 00:12:29.235 8.391 - 8.439: 98.8188% ( 1) 00:12:29.235 8.628 - 8.676: 98.8265% ( 1) 00:12:29.235 8.676 - 8.723: 98.8342% ( 1) 00:12:29.235 8.723 - 8.770: 98.8418% ( 1) 00:12:29.235 8.913 - 8.960: 98.8649% ( 3) 00:12:29.235 9.055 - 9.102: 98.8725% ( 1) 00:12:29.235 9.481 - 9.529: 98.8802% ( 1) 00:12:29.235 9.576 - 9.624: 98.8879% ( 1) 00:12:29.235 9.861 - 9.908: 98.8955% ( 1) 00:12:29.235 10.240 - 10.287: 98.9032% ( 1) 00:12:29.235 10.335 - 10.382: 98.9109% ( 1) 00:12:29.235 10.430 - 10.477: 98.9185% ( 1) 00:12:29.235 10.477 - 10.524: 98.9262% ( 1) 00:12:29.235 10.524 - 10.572: 98.9416% ( 2) 00:12:29.235 10.619 - 10.667: 98.9492% ( 1) 00:12:29.235 10.809 - 10.856: 98.9569% ( 1) 00:12:29.235 11.046 - 11.093: 98.9646% ( 1) 00:12:29.235 11.236 - 11.283: 98.9722% ( 1) 00:12:29.235 11.330 - 11.378: 98.9799% ( 1) 00:12:29.235 11.567 - 11.615: 98.9876% ( 1) 00:12:29.235 11.615 - 11.662: 99.0029% ( 2) 00:12:29.235 11.662 - 11.710: 99.0106% ( 1) 00:12:29.235 11.804 - 11.852: 99.0183% ( 1) 00:12:29.235 11.899 - 11.947: 99.0259% ( 1) 00:12:29.235 11.994 - 12.041: 99.0336% ( 1) 00:12:29.235 12.136 - 12.231: 99.0413% ( 1) 00:12:29.235 12.231 - 12.326: 99.0489% ( 1) 00:12:29.235 12.610 - 12.705: 99.0566% ( 1) 00:12:29.235 12.990 - 13.084: 99.0796% ( 3) 
00:12:29.235 13.179 - 13.274: 99.0873% ( 1) 00:12:29.235 13.369 - 13.464: 99.0950% ( 1) 00:12:29.235 14.033 - 14.127: 99.1103% ( 2) 00:12:29.235 14.317 - 14.412: 99.1180% ( 1) 00:12:29.235 14.412 - 14.507: 99.1333% ( 2) 00:12:29.235 14.507 - 14.601: 99.1410% ( 1) 00:12:29.235 14.696 - 14.791: 99.1563% ( 2) 00:12:29.235 14.791 - 14.886: 99.1640% ( 1) 00:12:29.235 16.972 - 17.067: 99.1717% ( 1) 00:12:29.235 17.256 - 17.351: 99.1793% ( 1) 00:12:29.235 17.351 - 17.446: 99.1870% ( 1) 00:12:29.235 17.446 - 17.541: 99.2484% ( 8) 00:12:29.235 17.541 - 17.636: 99.2714% ( 3) 00:12:29.235 17.636 - 17.730: 99.2944% ( 3) 00:12:29.235 17.730 - 17.825: 99.3557% ( 8) 00:12:29.235 17.825 - 17.920: 99.3864% ( 4) 00:12:29.235 17.920 - 18.015: 99.4248% ( 5) 00:12:29.235 18.015 - 18.110: 99.4708% ( 6) 00:12:29.235 18.110 - 18.204: 99.4938% ( 3) 00:12:29.235 18.204 - 18.299: 99.5245% ( 4) 00:12:29.235 18.299 - 18.394: 99.5475% ( 3) 00:12:29.235 18.394 - 18.489: 99.6088% ( 8) 00:12:29.235 18.489 - 18.584: 99.6549% ( 6) 00:12:29.235 18.584 - 18.679: 99.6702% ( 2) 00:12:29.235 18.679 - 18.773: 99.6932% ( 3) 00:12:29.235 18.773 - 18.868: 99.7162% ( 3) 00:12:29.235 18.868 - 18.963: 99.7316% ( 2) 00:12:29.235 18.963 - 19.058: 99.7622% ( 4) 00:12:29.235 19.058 - 19.153: 99.8006% ( 5) 00:12:29.235 19.153 - 19.247: 99.8083% ( 1) 00:12:29.235 19.342 - 19.437: 99.8236% ( 2) 00:12:29.235 19.437 - 19.532: 99.8313% ( 1) 00:12:29.235 19.627 - 19.721: 99.8389% ( 1) 00:12:29.235 19.721 - 19.816: 99.8543% ( 2) 00:12:29.235 20.006 - 20.101: 99.8773% ( 3) 00:12:29.235 20.101 - 20.196: 99.8850% ( 1) 00:12:29.235 20.575 - 20.670: 99.8926% ( 1) 00:12:29.235 21.144 - 21.239: 99.9003% ( 1) 00:12:29.235 21.428 - 21.523: 99.9080% ( 1) 00:12:29.235 25.600 - 25.790: 99.9156% ( 1) 00:12:29.235 28.634 - 28.824: 99.9310% ( 2) 00:12:29.235 3980.705 - 4004.978: 99.9540% ( 3) 00:12:29.235 4004.978 - 4029.250: 100.0000% ( 6) 00:12:29.235 00:12:29.235 Complete histogram 00:12:29.235 ================== 00:12:29.235 Range in us Cumulative Count 00:12:29.235 2.050 - 2.062: 0.0767% ( 10) 00:12:29.235 2.062 - 2.074: 26.1160% ( 3395) 00:12:29.235 2.074 - 2.086: 42.7443% ( 2168) 00:12:29.235 2.086 - 2.098: 45.1066% ( 308) 00:12:29.235 2.098 - 2.110: 57.0640% ( 1559) 00:12:29.235 2.110 - 2.121: 60.4234% ( 438) 00:12:29.235 2.121 - 2.133: 62.9161% ( 325) 00:12:29.235 2.133 - 2.145: 76.7142% ( 1799) 00:12:29.235 2.145 - 2.157: 80.1503% ( 448) 00:12:29.235 2.157 - 2.169: 82.3823% ( 291) 00:12:29.235 2.169 - 2.181: 86.6007% ( 550) 00:12:29.235 2.181 - 2.193: 87.9890% ( 181) 00:12:29.235 2.193 - 2.204: 88.7713% ( 102) 00:12:29.235 2.204 - 2.216: 90.7194% ( 254) 00:12:29.235 2.216 - 2.228: 92.3301% ( 210) 00:12:29.235 2.228 - 2.240: 93.8871% ( 203) 00:12:29.235 2.240 - 2.252: 94.7615% ( 114) 00:12:29.236 2.252 - 2.264: 95.0606% ( 39) 00:12:29.236 2.264 - 2.276: 95.1833% ( 16) 00:12:29.236 2.276 - 2.287: 95.2907% ( 14) 00:12:29.236 2.287 - 2.299: 95.6358% ( 45) 00:12:29.236 2.299 - 2.311: 95.9043% ( 35) 00:12:29.236 2.311 - 2.323: 96.0193% ( 15) 00:12:29.236 2.323 - 2.335: 96.1267% ( 14) 00:12:29.236 2.335 - 2.347: 96.2418% ( 15) 00:12:29.236 2.347 - 2.359: 96.4719% ( 30) 00:12:29.236 2.359 - 2.370: 96.8323% ( 47) 00:12:29.236 2.370 - 2.382: 97.1621% ( 43) 00:12:29.236 2.382 - 2.394: 97.5456% ( 50) 00:12:29.236 2.394 - 2.406: 97.7987% ( 33) 00:12:29.236 2.406 - 2.418: 98.0058% ( 27) 00:12:29.236 2.418 - 2.430: 98.1285% ( 16) 00:12:29.236 2.430 - 2.441: 98.2359% ( 14) 00:12:29.236 2.441 - 2.453: 98.3050% ( 9) 00:12:29.236 2.453 - 2.465: 98.3663% ( 8) 00:12:29.236 
2.465 - 2.477: 98.3970% ( 4) 00:12:29.236 2.477 - 2.489: 98.4507% ( 7) 00:12:29.236 2.489 - 2.501: 98.4660% ( 2) 00:12:29.236 2.501 - 2.513: 98.4737% ( 1) 00:12:29.236 2.513 - 2.524: 98.4890% ( 2) 00:12:29.236 2.524 - 2.536: 98.4967% ( 1) 00:12:29.236 2.536 - 2.548: 98.5044% ( 1) 00:12:29.236 2.560 - 2.572: 98.5197% ( 2) 00:12:29.236 2.584 - 2.596: 98.5274% ( 1) 00:12:29.236 2.643 - 2.655: 98.5351% ( 1) 00:12:29.236 2.702 - 2.714: 98.5427% ( 1) 00:12:29.236 2.714 - 2.726: 98.5504% ( 1) 00:12:29.236 2.726 - 2.738: 98.5581% ( 1) 00:12:29.236 2.738 - 2.750: 98.5657% ( 1) 00:12:29.236 2.856 - 2.868: 98.5734% ( 1) 00:12:29.236 2.951 - 2.963: 98.5811% ( 1) 00:12:29.236 3.200 - 3.224: 98.5887% ( 1) 00:12:29.236 3.271 - 3.295: 98.6041% ( 2) 00:12:29.236 3.342 - 3.366: 98.6118% ( 1) 00:12:29.236 3.390 - 3.413: 98.6194% ( 1) 00:12:29.236 3.437 - 3.461: 98.6271% ( 1) 00:12:29.236 3.484 - 3.508: 98.6424% ( 2) 00:12:29.236 3.532 - 3.556: 98.6578% ( 2) 00:12:29.236 3.603 - 3.627: 98.6654% ( 1) 00:12:29.236 3.674 - 3.698: 98.6731% ( 1) 00:12:29.236 3.721 - 3.745: 98.6808% ( 1) 00:12:29.236 3.769 - 3.793: 98.6884% ( 1) 00:12:29.236 3.793 - 3.816: 98.6961% ( 1) 00:12:29.236 3.887 - 3.911: 98.7115% ( 2) 00:12:29.236 3.911 - 3.935: 98.7191% ( 1) 00:12:29.236 4.053 - 4.077: 98.7268% ( 1) 00:12:29.236 4.599 - 4.622: 98.7345% ( 1) 00:12:29.236 4.670 - 4.693: 98.7421% ( 1) 00:12:29.236 4.883 - 4.907: 98.7498% ( 1) 00:12:29.236 [2024-07-15 15:53:55.795602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.236 4.930 - 4.954: 98.7575% ( 1) 00:12:29.236 5.215 - 5.239: 98.7651% ( 1) 00:12:29.236 5.381 - 5.404: 98.7728% ( 1) 00:12:29.236 5.523 - 5.547: 98.7805% ( 1) 00:12:29.236 5.547 - 5.570: 98.7882% ( 1) 00:12:29.236 5.594 - 5.618: 98.8112% ( 3) 00:12:29.236 5.736 - 5.760: 98.8188% ( 1) 00:12:29.236 5.855 - 5.879: 98.8265% ( 1) 00:12:29.236 5.973 - 5.997: 98.8342% ( 1) 00:12:29.236 6.021 - 6.044: 98.8418% ( 1) 00:12:29.236 6.068 - 6.116: 98.8495% ( 1) 00:12:29.236 6.116 - 6.163: 98.8572% ( 1) 00:12:29.236 6.779 - 6.827: 98.8649% ( 1) 00:12:29.236 6.921 - 6.969: 98.8725% ( 1) 00:12:29.236 7.253 - 7.301: 98.8879% ( 2) 00:12:29.236 7.538 - 7.585: 98.8955% ( 1) 00:12:29.236 8.581 - 8.628: 98.9032% ( 1) 00:12:29.236 15.455 - 15.550: 98.9109% ( 1) 00:12:29.236 15.739 - 15.834: 98.9185% ( 1) 00:12:29.236 15.834 - 15.929: 98.9339% ( 2) 00:12:29.236 15.929 - 16.024: 98.9569% ( 3) 00:12:29.236 16.024 - 16.119: 98.9722% ( 2) 00:12:29.236 16.119 - 16.213: 98.9799% ( 1) 00:12:29.236 16.213 - 16.308: 99.0106% ( 4) 00:12:29.236 16.308 - 16.403: 99.0259% ( 2) 00:12:29.236 16.403 - 16.498: 99.0489% ( 3) 00:12:29.236 16.498 - 16.593: 99.0643% ( 2) 00:12:29.236 16.593 - 16.687: 99.0950% ( 4) 00:12:29.236 16.687 - 16.782: 99.1026% ( 1) 00:12:29.236 16.782 - 16.877: 99.1486% ( 6) 00:12:29.236 16.877 - 16.972: 99.1563% ( 1) 00:12:29.236 16.972 - 17.067: 99.1793% ( 3) 00:12:29.236 17.067 - 17.161: 99.2100% ( 4) 00:12:29.236 17.161 - 17.256: 99.2253% ( 2) 00:12:29.236 17.256 - 17.351: 99.2484% ( 3) 00:12:29.236 17.351 - 17.446: 99.2560% ( 1) 00:12:29.236 17.446 - 17.541: 99.2637% ( 1) 00:12:29.236 17.541 - 17.636: 99.2714% ( 1) 00:12:29.236 17.636 - 17.730: 99.3020% ( 4) 00:12:29.236 17.920 - 18.015: 99.3097% ( 1) 00:12:29.236 18.015 - 18.110: 99.3174% ( 1) 00:12:29.236 18.394 - 18.489: 99.3250% ( 1) 00:12:29.236 23.135 - 23.230: 99.3327% ( 1) 00:12:29.236 24.652 - 24.841: 99.3404% ( 1) 00:12:29.236 3980.705 - 4004.978: 99.7085% ( 48) 00:12:29.236 4004.978 - 4029.250: 100.0000% 
( 38) 00:12:29.236 00:12:29.236 15:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:29.236 15:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:29.236 15:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:29.236 15:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:29.236 15:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.236 [ 00:12:29.236 { 00:12:29.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.236 "subtype": "Discovery", 00:12:29.236 "listen_addresses": [], 00:12:29.236 "allow_any_host": true, 00:12:29.236 "hosts": [] 00:12:29.236 }, 00:12:29.236 { 00:12:29.236 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.236 "subtype": "NVMe", 00:12:29.236 "listen_addresses": [ 00:12:29.236 { 00:12:29.236 "trtype": "VFIOUSER", 00:12:29.236 "adrfam": "IPv4", 00:12:29.236 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.236 "trsvcid": "0" 00:12:29.236 } 00:12:29.236 ], 00:12:29.236 "allow_any_host": true, 00:12:29.236 "hosts": [], 00:12:29.236 "serial_number": "SPDK1", 00:12:29.236 "model_number": "SPDK bdev Controller", 00:12:29.236 "max_namespaces": 32, 00:12:29.236 "min_cntlid": 1, 00:12:29.236 "max_cntlid": 65519, 00:12:29.236 "namespaces": [ 00:12:29.236 { 00:12:29.236 "nsid": 1, 00:12:29.236 "bdev_name": "Malloc1", 00:12:29.236 "name": "Malloc1", 00:12:29.236 "nguid": "D9B14EB9A3B541ACB2C0A24C2371A929", 00:12:29.236 "uuid": "d9b14eb9-a3b5-41ac-b2c0-a24c2371a929" 00:12:29.236 }, 00:12:29.236 { 00:12:29.236 "nsid": 2, 00:12:29.236 "bdev_name": "Malloc3", 00:12:29.237 "name": "Malloc3", 00:12:29.237 "nguid": "0375881A7FC448F5BB9E5427EB61F74C", 00:12:29.237 "uuid": "0375881a-7fc4-48f5-bb9e-5427eb61f74c" 00:12:29.237 } 00:12:29.237 ] 00:12:29.237 }, 00:12:29.237 { 00:12:29.237 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.237 "subtype": "NVMe", 00:12:29.237 "listen_addresses": [ 00:12:29.237 { 00:12:29.237 "trtype": "VFIOUSER", 00:12:29.237 "adrfam": "IPv4", 00:12:29.237 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.237 "trsvcid": "0" 00:12:29.237 } 00:12:29.237 ], 00:12:29.237 "allow_any_host": true, 00:12:29.237 "hosts": [], 00:12:29.237 "serial_number": "SPDK2", 00:12:29.237 "model_number": "SPDK bdev Controller", 00:12:29.237 "max_namespaces": 32, 00:12:29.237 "min_cntlid": 1, 00:12:29.237 "max_cntlid": 65519, 00:12:29.237 "namespaces": [ 00:12:29.237 { 00:12:29.237 "nsid": 1, 00:12:29.237 "bdev_name": "Malloc2", 00:12:29.237 "name": "Malloc2", 00:12:29.237 "nguid": "C462BDAEF7A34E90A6490C8F6B6E0050", 00:12:29.237 "uuid": "c462bdae-f7a3-4e90-a649-0c8f6b6e0050" 00:12:29.237 } 00:12:29.237 ] 00:12:29.237 } 00:12:29.237 ] 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1104702 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.237 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:29.237 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.495 [2024-07-15 15:53:56.249563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.496 Malloc4 00:12:29.496 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:29.754 [2024-07-15 15:53:56.609431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.754 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.754 Asynchronous Event Request test 00:12:29.754 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.754 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.754 Registering asynchronous event callbacks... 00:12:29.754 Starting namespace attribute notice tests for all controllers... 00:12:29.754 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.754 aer_cb - Changed Namespace 00:12:29.754 Cleaning up... 
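The AER test above works by hot-attaching a second namespace while test/nvme/aer/aer waits on the touch file, which triggers the namespace-attribute-changed event logged as "aer_cb - Changed Namespace". A minimal sketch of that RPC sequence, with the bdev name and NQN as used in this run (the nvmf_get_subsystems dump that follows shows the result, Malloc4 attached as NSID 2):

    # Hot-add a namespace to a live vfio-user subsystem; paths and names as in this log.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 --name Malloc4                        # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as NSID 2, firing the AEN
    $RPC nvmf_get_subsystems                                             # NSID 2 now listed under cnode2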
00:12:30.013 [ 00:12:30.013 { 00:12:30.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:30.013 "subtype": "Discovery", 00:12:30.013 "listen_addresses": [], 00:12:30.013 "allow_any_host": true, 00:12:30.013 "hosts": [] 00:12:30.013 }, 00:12:30.013 { 00:12:30.013 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:30.013 "subtype": "NVMe", 00:12:30.013 "listen_addresses": [ 00:12:30.013 { 00:12:30.013 "trtype": "VFIOUSER", 00:12:30.013 "adrfam": "IPv4", 00:12:30.013 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:30.013 "trsvcid": "0" 00:12:30.013 } 00:12:30.013 ], 00:12:30.013 "allow_any_host": true, 00:12:30.013 "hosts": [], 00:12:30.013 "serial_number": "SPDK1", 00:12:30.013 "model_number": "SPDK bdev Controller", 00:12:30.013 "max_namespaces": 32, 00:12:30.013 "min_cntlid": 1, 00:12:30.013 "max_cntlid": 65519, 00:12:30.013 "namespaces": [ 00:12:30.013 { 00:12:30.013 "nsid": 1, 00:12:30.013 "bdev_name": "Malloc1", 00:12:30.013 "name": "Malloc1", 00:12:30.013 "nguid": "D9B14EB9A3B541ACB2C0A24C2371A929", 00:12:30.013 "uuid": "d9b14eb9-a3b5-41ac-b2c0-a24c2371a929" 00:12:30.013 }, 00:12:30.013 { 00:12:30.013 "nsid": 2, 00:12:30.013 "bdev_name": "Malloc3", 00:12:30.013 "name": "Malloc3", 00:12:30.013 "nguid": "0375881A7FC448F5BB9E5427EB61F74C", 00:12:30.013 "uuid": "0375881a-7fc4-48f5-bb9e-5427eb61f74c" 00:12:30.013 } 00:12:30.013 ] 00:12:30.013 }, 00:12:30.013 { 00:12:30.013 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:30.013 "subtype": "NVMe", 00:12:30.013 "listen_addresses": [ 00:12:30.013 { 00:12:30.013 "trtype": "VFIOUSER", 00:12:30.013 "adrfam": "IPv4", 00:12:30.013 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:30.013 "trsvcid": "0" 00:12:30.013 } 00:12:30.013 ], 00:12:30.013 "allow_any_host": true, 00:12:30.013 "hosts": [], 00:12:30.013 "serial_number": "SPDK2", 00:12:30.013 "model_number": "SPDK bdev Controller", 00:12:30.013 "max_namespaces": 32, 00:12:30.013 "min_cntlid": 1, 00:12:30.013 "max_cntlid": 65519, 00:12:30.013 "namespaces": [ 00:12:30.013 { 00:12:30.013 "nsid": 1, 00:12:30.013 "bdev_name": "Malloc2", 00:12:30.013 "name": "Malloc2", 00:12:30.013 "nguid": "C462BDAEF7A34E90A6490C8F6B6E0050", 00:12:30.013 "uuid": "c462bdae-f7a3-4e90-a649-0c8f6b6e0050" 00:12:30.013 }, 00:12:30.013 { 00:12:30.013 "nsid": 2, 00:12:30.013 "bdev_name": "Malloc4", 00:12:30.013 "name": "Malloc4", 00:12:30.013 "nguid": "8B9ADEE8F70A4C46B552DFB78CB70352", 00:12:30.013 "uuid": "8b9adee8-f70a-4c46-b552-dfb78cb70352" 00:12:30.013 } 00:12:30.013 ] 00:12:30.013 } 00:12:30.013 ] 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1104702 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1099081 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1099081 ']' 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1099081 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:30.013 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1099081 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1099081' 00:12:30.014 killing process with pid 1099081 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1099081 00:12:30.014 15:53:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1099081 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1104843 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1104843' 00:12:30.581 Process pid: 1104843 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1104843 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1104843 ']' 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.581 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:30.581 [2024-07-15 15:53:57.316644] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:30.581 [2024-07-15 15:53:57.317631] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:30.581 [2024-07-15 15:53:57.317687] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.581 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.581 [2024-07-15 15:53:57.374236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.581 [2024-07-15 15:53:57.484128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.581 [2024-07-15 15:53:57.484200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:30.581 [2024-07-15 15:53:57.484213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.581 [2024-07-15 15:53:57.484237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.581 [2024-07-15 15:53:57.484248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.581 [2024-07-15 15:53:57.484341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.581 [2024-07-15 15:53:57.484406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.581 [2024-07-15 15:53:57.484472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.581 [2024-07-15 15:53:57.484475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.841 [2024-07-15 15:53:57.585575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:30.841 [2024-07-15 15:53:57.585790] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:30.841 [2024-07-15 15:53:57.586125] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:30.841 [2024-07-15 15:53:57.586762] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:30.841 [2024-07-15 15:53:57.587036] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:30.841 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.841 15:53:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:30.841 15:53:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:31.778 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:32.036 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:32.036 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:32.036 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.036 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:32.036 15:53:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.294 Malloc1 00:12:32.294 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:32.552 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:32.810 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:33.068 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:33.068 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:33.068 15:53:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:33.326 Malloc2 00:12:33.326 15:54:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:33.584 15:54:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:33.842 15:54:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1104843 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1104843 ']' 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1104843 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1104843 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1104843' 00:12:34.101 killing process with pid 1104843 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1104843 00:12:34.101 15:54:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1104843 00:12:34.359 15:54:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:34.359 15:54:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.359 00:12:34.359 real 0m53.614s 00:12:34.359 user 3m31.744s 00:12:34.359 sys 0m4.383s 00:12:34.359 15:54:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.359 15:54:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:34.359 ************************************ 00:12:34.359 END TEST nvmf_vfio_user 00:12:34.359 ************************************ 00:12:34.618 15:54:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:34.618 15:54:01 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:34.618 15:54:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:34.618 15:54:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.618 15:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.618 ************************************ 00:12:34.618 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:34.618 ************************************ 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:34.618 * Looking for test storage... 00:12:34.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.618 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1105525 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1105525' 00:12:34.619 Process pid: 1105525 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1105525 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1105525 ']' 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.619 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.619 [2024-07-15 15:54:01.432364] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:34.619 [2024-07-15 15:54:01.432448] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.619 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.619 [2024-07-15 15:54:01.491934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.877 [2024-07-15 15:54:01.601177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.877 [2024-07-15 15:54:01.601233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.877 [2024-07-15 15:54:01.601263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.877 [2024-07-15 15:54:01.601275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.877 [2024-07-15 15:54:01.601286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.877 [2024-07-15 15:54:01.601418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.878 [2024-07-15 15:54:01.601466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.878 [2024-07-15 15:54:01.601469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.878 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.878 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:34.878 15:54:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.815 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.104 malloc0 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.104 15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.104 
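Pulled out of the xtrace above, the entire vfio-user target configuration is five RPC calls plus a mkdir. A standalone sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the test script drives the same calls through its rpc_cmd wrapper); all values are the ones the trace shows.

# vfio-user target configuration as traced above, issued via rpc.py.
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0    # MALLOC_BDEV_SIZE MiB, MALLOC_BLOCK_SIZE bytes
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary run next connects to that /var/run/vfio-user socket directory as its controller.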
15:54:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:36.104 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.104 00:12:36.104 00:12:36.104 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.104 http://cunit.sourceforge.net/ 00:12:36.104 00:12:36.104 00:12:36.104 Suite: nvme_compliance 00:12:36.104 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 15:54:02.943437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.104 [2024-07-15 15:54:02.944954] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:36.104 [2024-07-15 15:54:02.944980] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:36.104 [2024-07-15 15:54:02.944993] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:36.104 [2024-07-15 15:54:02.946458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.104 passed 00:12:36.364 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 15:54:03.034119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.364 [2024-07-15 15:54:03.037144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.364 passed 00:12:36.364 Test: admin_identify_ns ...[2024-07-15 15:54:03.127999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.364 [2024-07-15 15:54:03.188898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:36.364 [2024-07-15 15:54:03.196911] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:36.364 [2024-07-15 15:54:03.218009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.364 passed 00:12:36.624 Test: admin_get_features_mandatory_features ...[2024-07-15 15:54:03.301149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.624 [2024-07-15 15:54:03.304182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.624 passed 00:12:36.624 Test: admin_get_features_optional_features ...[2024-07-15 15:54:03.391766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.624 [2024-07-15 15:54:03.394787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.624 passed 00:12:36.625 Test: admin_set_features_number_of_queues ...[2024-07-15 15:54:03.478208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.884 [2024-07-15 15:54:03.585993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.884 passed 00:12:36.884 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 15:54:03.667055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.884 [2024-07-15 15:54:03.670078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.884 passed 00:12:36.884 Test: admin_get_log_page_with_lpo ...[2024-07-15 15:54:03.756587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.142 [2024-07-15 15:54:03.823894] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:37.142 [2024-07-15 15:54:03.836957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.142 passed 00:12:37.142 Test: fabric_property_get ...[2024-07-15 15:54:03.920905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.142 [2024-07-15 15:54:03.922188] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:37.142 [2024-07-15 15:54:03.923928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.142 passed 00:12:37.142 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 15:54:04.009497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.142 [2024-07-15 15:54:04.010788] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:37.142 [2024-07-15 15:54:04.012516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.142 passed 00:12:37.403 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 15:54:04.097957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.403 [2024-07-15 15:54:04.183904] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.403 [2024-07-15 15:54:04.199891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.403 [2024-07-15 15:54:04.205013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.403 passed 00:12:37.403 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 15:54:04.289705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.403 [2024-07-15 15:54:04.291025] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:37.403 [2024-07-15 15:54:04.292737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.403 passed 00:12:37.662 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 15:54:04.375278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.662 [2024-07-15 15:54:04.454905] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:37.662 [2024-07-15 15:54:04.478904] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.662 [2024-07-15 15:54:04.480937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.662 passed 00:12:37.662 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 15:54:04.566198] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.662 [2024-07-15 15:54:04.571168] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:37.662 [2024-07-15 15:54:04.571220] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:37.662 [2024-07-15 15:54:04.573254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.921 passed 00:12:37.921 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 15:54:04.657841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.921 [2024-07-15 15:54:04.746901] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:37.922 [2024-07-15 15:54:04.754888] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:37.922 [2024-07-15 15:54:04.762903] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:37.922 [2024-07-15 15:54:04.770890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:37.922 [2024-07-15 15:54:04.799995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.922 passed 00:12:38.191 Test: admin_create_io_sq_verify_pc ...[2024-07-15 15:54:04.887649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.191 [2024-07-15 15:54:04.902904] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:38.191 [2024-07-15 15:54:04.920203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.191 passed 00:12:38.191 Test: admin_create_io_qp_max_qps ...[2024-07-15 15:54:05.003774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.567 [2024-07-15 15:54:06.109895] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:39.567 [2024-07-15 15:54:06.492043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.825 passed 00:12:39.825 Test: admin_create_io_sq_shared_cq ...[2024-07-15 15:54:06.580762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.825 [2024-07-15 15:54:06.709903] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:39.825 [2024-07-15 15:54:06.746973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:40.083 passed 00:12:40.083 00:12:40.083 Run Summary: Type Total Ran Passed Failed Inactive 00:12:40.083 suites 1 1 n/a 0 0 00:12:40.083 tests 18 18 18 0 0 00:12:40.083 asserts 360 360 360 0 n/a 00:12:40.083 00:12:40.083 Elapsed time = 1.576 seconds 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1105525 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1105525 ']' 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1105525 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105525 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105525' 00:12:40.083 killing process with pid 1105525 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1105525 00:12:40.083 15:54:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1105525 00:12:40.342 15:54:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:40.342 00:12:40.342 real 0m5.785s 00:12:40.342 user 0m16.196s 00:12:40.342 sys 0m0.516s 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:40.342 ************************************ 00:12:40.342 END TEST nvmf_vfio_user_nvme_compliance 00:12:40.342 ************************************ 00:12:40.342 15:54:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:40.342 15:54:07 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:40.342 15:54:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:40.342 15:54:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.342 15:54:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.342 ************************************ 00:12:40.342 START TEST nvmf_vfio_user_fuzz 00:12:40.342 ************************************ 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:40.342 * Looking for test storage... 00:12:40.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
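A side note on the host identity set just above: NVME_HOSTNQN comes from nvme-cli and NVME_HOSTID is its uuid suffix. One way to reproduce the derivation (a sketch only; the xtrace shows the expanded assignment, not the exact parameter expansion common.sh uses):

# Host identity sketch: 'nvme gen-hostnqn' emits
# nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is that uuid.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the uuid contains no ':', so this keeps it whole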
00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.342 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.343 15:54:07 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1106297 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1106297' 00:12:40.343 Process pid: 1106297 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1106297 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1106297 ']' 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
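waitforlisten, traced here for pid 1106297, amounts to polling for the target's RPC socket while checking the pid stays alive. A simplified stand-in follows; the real helper in autotest_common.sh probes with an actual RPC call rather than a bare socket test, but the max_retries=100 local it sets is the same bound used below.

# Rough equivalent of waitforlisten: wait for the UNIX-domain RPC socket,
# giving up if the target process exits first.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
while (( max_retries-- > 0 )); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S $rpc_addr ]] && break
    sleep 0.1
done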
00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.343 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:40.910 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.910 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:40.910 15:54:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.846 malloc0 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:41.846 15:54:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:13.899 Fuzzing completed. 
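Before the result dump that follows, the fuzzer invocation buried in the trace above, re-wrapped for readability. Only the flag readings the surrounding log supports are annotated; -N and -a are left exactly as the test script passes them.

# nvme_fuzz invocation from vfio_user_fuzz.sh@43, run from the SPDK tree.
# -m 0x2    core mask (a single fuzzing core)
# -t 30     run time in seconds (consistent with the ~32 s 'real' time reported below)
# -S 123456 random seed (the summary below echoes derived per-queue seeds)
# -F ...    transport ID of the subsystem under test
# -N -a     mode flags passed verbatim by the test script
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a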
Shutting down the fuzz application 00:13:13.899 00:13:13.899 Dumping successful admin opcodes: 00:13:13.899 8, 9, 10, 24, 00:13:13.899 Dumping successful io opcodes: 00:13:13.899 0, 00:13:13.899 NS: 0x200003a1ef00 I/O qp, Total commands completed: 588795, total successful commands: 2275, random_seed: 1578500864 00:13:13.899 NS: 0x200003a1ef00 admin qp, Total commands completed: 75118, total successful commands: 587, random_seed: 4224583424 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1106297 ']' 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1106297' 00:13:13.899 killing process with pid 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1106297 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:13.899 00:13:13.899 real 0m32.351s 00:13:13.899 user 0m31.361s 00:13:13.899 sys 0m28.958s 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.899 15:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.899 ************************************ 00:13:13.899 END TEST nvmf_vfio_user_fuzz 00:13:13.899 ************************************ 00:13:13.899 15:54:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.899 15:54:39 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:13.899 15:54:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.899 15:54:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.899 15:54:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.899 ************************************ 00:13:13.899 
START TEST nvmf_host_management 00:13:13.899 ************************************ 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:13.899 * Looking for test storage... 00:13:13.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.899 15:54:39 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.899 15:54:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.900 15:54:39 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.900 15:54:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.837 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.837 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.837 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.837 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:14.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:14.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:14.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:14.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:14.838 00:13:14.838 --- 10.0.0.2 ping statistics --- 00:13:14.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.838 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:14.838 00:13:14.838 --- 10.0.0.1 ping statistics --- 00:13:14.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.838 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.838 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1112235 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1112235 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1112235 ']' 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.097 15:54:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.097 [2024-07-15 15:54:41.817781] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:15.097 [2024-07-15 15:54:41.817887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.097 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.097 [2024-07-15 15:54:41.886713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.097 [2024-07-15 15:54:42.005477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.097 [2024-07-15 15:54:42.005542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.097 [2024-07-15 15:54:42.005568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.097 [2024-07-15 15:54:42.005581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.097 [2024-07-15 15:54:42.005592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.097 [2024-07-15 15:54:42.005702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.097 [2024-07-15 15:54:42.005794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.097 [2024-07-15 15:54:42.005871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:15.097 [2024-07-15 15:54:42.005872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 [2024-07-15 15:54:42.786682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 15:54:42 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 Malloc0 00:13:16.035 [2024-07-15 15:54:42.851764] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1112410 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1112410 /var/tmp/bdevperf.sock 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1112410 ']' 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:16.035 { 00:13:16.035 "params": { 00:13:16.035 "name": "Nvme$subsystem", 00:13:16.035 "trtype": "$TEST_TRANSPORT", 00:13:16.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:16.035 "adrfam": "ipv4", 00:13:16.035 "trsvcid": "$NVMF_PORT", 00:13:16.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:16.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:16.035 "hdgst": ${hdgst:-false}, 00:13:16.035 "ddgst": ${ddgst:-false} 00:13:16.035 }, 00:13:16.035 "method": "bdev_nvme_attach_controller" 00:13:16.035 } 00:13:16.035 EOF 00:13:16.035 )") 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:16.035 15:54:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:16.035 "params": { 00:13:16.035 "name": "Nvme0", 00:13:16.035 "trtype": "tcp", 00:13:16.035 "traddr": "10.0.0.2", 00:13:16.035 "adrfam": "ipv4", 00:13:16.035 "trsvcid": "4420", 00:13:16.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:16.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:16.035 "hdgst": false, 00:13:16.035 "ddgst": false 00:13:16.035 }, 00:13:16.035 "method": "bdev_nvme_attach_controller" 00:13:16.035 }' 00:13:16.035 [2024-07-15 15:54:42.931356] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:16.035 [2024-07-15 15:54:42.931442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112410 ] 00:13:16.035 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.294 [2024-07-15 15:54:42.992134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.294 [2024-07-15 15:54:43.102246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.551 Running I/O for 10 seconds... 
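nvmf/common.sh@532-@558 above assemble bdevperf's entire bdev configuration as JSON: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem argument (here a single Nvme0), jq pretty-prints it, and bdevperf reads the result through --json /dev/fd/63. A hand-written equivalent of that config, as a sketch; the outer "subsystems"/"config" wrapper is assumed from SPDK's JSON config layout, since the trace only echoes the inner object:

    # Attach one NVMe-oF TCP controller (the target created above) and run
    # the same 64-deep, 64 KiB verify workload for 10 seconds.
    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        } ]
      } ]
    }'
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(echo "$config") \
        -q 64 -o 65536 -w verify -t 10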
00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.551 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:13:16.552 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.840 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.840
[2024-07-15 15:54:43.742900 - 15:54:43.745145] nvme_qpair.c: 243/474: *NOTICE*: [per-command abort dump condensed: 64 in-flight I/O commands on qid:1 (WRITE cid:61-63 lba:73344-73600, READ cid:0-60 lba:65536-73216, len:128 each) were logged individually by nvme_io_qpair_print_command and each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:13:16.842 [2024-07-15 15:54:43.745172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2751900 is same with the state(5) to be set 00:13:16.842 [2024-07-15 15:54:43.745256] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2751900 was disconnected and freed. reset controller.
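The ret=0/break/return 0 at the top of this span close out waitforio (host_management.sh@52-@64): it samples bdevperf's per-bdev iostat until at least 100 reads have completed, and the trace shows 65 ops on the first sample and 451 a quarter-second later. A minimal sketch of that polling pattern, reconstructed from the xtrace rather than copied from the script:

    # Poll up to ten times, 0.25 s apart, for >= 100 completed reads on the
    # bdev; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i
        for ((i = 10; i != 0; i--)); do
            local count
            count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1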
00:13:16.842 [2024-07-15 15:54:43.746475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
task offset: 73344 on job bdev=Nvme0n1 fails
00:13:16.842
00:13:16.842 Latency(us)
00:13:16.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:16.842 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:16.842 Job: Nvme0n1 ended in about 0.38 seconds with error
00:13:16.842 Verification LBA range: start 0x0 length 0x400
00:13:16.842 Nvme0n1 : 0.38 1355.12 84.70 169.39 0.00 40756.06 2924.85 37671.06
00:13:16.842 ===================================================================================================================
00:13:16.842 Total : 1355.12 84.70 169.39 0.00 40756.06 2924.85 37671.06
00:13:16.842 [2024-07-15 15:54:43.748381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:16.842 [2024-07-15 15:54:43.748412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2340790 (9): Bad file descriptor
00:13:17.101 15:54:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:17.102 15:54:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 15:54:43.881076] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
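The failed-job table above is the intended outcome, not a regression: revoking host0's access at @84 forces every queued command to abort (the SQ DELETION dump condensed earlier), and re-adding it at @85 lets bdev_nvme's automatic reset reconnect, confirmed by "Resetting controller successful". The fault-injection pair in isolation:

    # Revoke and restore host access while verify I/O is in flight
    # (mirrors host_management.sh@84-@87 above).
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # leave time for the controller reset to complete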
00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1112410 00:13:18.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1112410) - No such process 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.041 { 00:13:18.041 "params": { 00:13:18.041 "name": "Nvme$subsystem", 00:13:18.041 "trtype": "$TEST_TRANSPORT", 00:13:18.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.041 "adrfam": "ipv4", 00:13:18.041 "trsvcid": "$NVMF_PORT", 00:13:18.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.041 "hdgst": ${hdgst:-false}, 00:13:18.041 "ddgst": ${ddgst:-false} 00:13:18.041 }, 00:13:18.041 "method": "bdev_nvme_attach_controller" 00:13:18.041 } 00:13:18.041 EOF 00:13:18.041 )") 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:18.041 15:54:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.041 "params": { 00:13:18.041 "name": "Nvme0", 00:13:18.041 "trtype": "tcp", 00:13:18.041 "traddr": "10.0.0.2", 00:13:18.041 "adrfam": "ipv4", 00:13:18.041 "trsvcid": "4420", 00:13:18.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:18.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:18.041 "hdgst": false, 00:13:18.041 "ddgst": false 00:13:18.041 }, 00:13:18.041 "method": "bdev_nvme_attach_controller" 00:13:18.041 }' 00:13:18.041 [2024-07-15 15:54:44.800207] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:18.041 [2024-07-15 15:54:44.800314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112693 ] 00:13:18.041 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.041 [2024-07-15 15:54:44.860166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.041 [2024-07-15 15:54:44.967847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.610 Running I/O for 1 seconds... 
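host_management.sh@91-@100 above then clean up and re-verify: the first bdevperf already exited when its job failed, so the kill -9 reports "No such process" and is swallowed with true; the per-core lock files are removed before a fresh 1-second verify run proves the target still serves I/O after the reset cycle. The same sequence as a standalone sketch (gen_nvmf_target_json is the nvmf/common.sh helper traced earlier):

    kill -9 "$perfpid" || true   # 'No such process' is the expected outcome here
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1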
00:13:19.547 00:13:19.547 Latency(us) 00:13:19.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.547 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:19.547 Verification LBA range: start 0x0 length 0x400 00:13:19.547 Nvme0n1 : 1.02 1506.31 94.14 0.00 0.00 41832.91 9369.22 34952.53 00:13:19.547 =================================================================================================================== 00:13:19.547 Total : 1506.31 94.14 0.00 0.00 41832.91 9369.22 34952.53 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.806 rmmod nvme_tcp 00:13:19.806 rmmod nvme_fabrics 00:13:19.806 rmmod nvme_keyring 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1112235 ']' 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1112235 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1112235 ']' 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1112235 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1112235 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1112235' 00:13:19.806 killing process with pid 1112235 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1112235 00:13:19.806 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1112235 00:13:20.064 [2024-07-15 15:54:46.917888] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.064 15:54:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.603 15:54:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.603 15:54:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:22.603 00:13:22.603 real 0m9.425s 00:13:22.603 user 0m23.000s 00:13:22.603 sys 0m2.659s 00:13:22.603 15:54:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.603 15:54:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.603 ************************************ 00:13:22.603 END TEST nvmf_host_management 00:13:22.603 ************************************ 00:13:22.603 15:54:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:22.603 15:54:49 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:22.603 15:54:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.603 15:54:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.603 15:54:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.603 ************************************ 00:13:22.603 START TEST nvmf_lvol 00:13:22.603 ************************************ 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:22.603 * Looking for test storage... 
00:13:22.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:22.603 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.604 15:54:49 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.604 15:54:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.999 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:24.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:24.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.257 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:24.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:24.258 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.258 
15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.258 15:54:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:24.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:13:24.258 00:13:24.258 --- 10.0.0.2 ping statistics --- 00:13:24.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.258 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:24.258 00:13:24.258 --- 10.0.0.1 ping statistics --- 00:13:24.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.258 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1114768 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1114768 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1114768 ']' 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.258 15:54:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 [2024-07-15 15:54:51.140486] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:24.258 [2024-07-15 15:54:51.140577] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.258 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.517 [2024-07-15 15:54:51.211311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.517 [2024-07-15 15:54:51.330119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.517 [2024-07-15 15:54:51.330198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:24.517 [2024-07-15 15:54:51.330214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.517 [2024-07-15 15:54:51.330238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.517 [2024-07-15 15:54:51.330249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.517 [2024-07-15 15:54:51.330334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.517 [2024-07-15 15:54:51.330418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.517 [2024-07-15 15:54:51.330401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.449 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.449 [2024-07-15 15:54:52.369658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.707 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:25.964 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:25.964 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:26.220 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:26.220 15:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:26.478 15:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:26.736 15:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7b82dee6-df91-46a7-84c1-814955918318 00:13:26.736 15:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7b82dee6-df91-46a7-84c1-814955918318 lvol 20 00:13:26.993 15:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a2a281a6-b1f7-4ed4-9c9f-47fbe9cf866c 00:13:26.993 15:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:27.251 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2a281a6-b1f7-4ed4-9c9f-47fbe9cf866c 00:13:27.508 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
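
Annotation: between transport creation and the perf run, the trace above builds the device stack with plain rpc.py calls. A condensed replay of those calls (rpc.py path shortened; capturing the printed UUIDs into shell variables is illustrative and assumes each call succeeds):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # -> Malloc0
  $rpc bdev_malloc_create 64 512                    # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore on the RAID-0 bdev; prints its UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
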
00:13:27.766 [2024-07-15 15:54:54.504165] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.766 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.024 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1115325 00:13:28.024 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:28.024 15:54:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:28.024 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.961 15:54:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a2a281a6-b1f7-4ed4-9c9f-47fbe9cf866c MY_SNAPSHOT 00:13:29.218 15:54:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0e8395cd-83e4-457d-90d5-470aaaca5c03 00:13:29.218 15:54:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a2a281a6-b1f7-4ed4-9c9f-47fbe9cf866c 30 00:13:29.476 15:54:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0e8395cd-83e4-457d-90d5-470aaaca5c03 MY_CLONE 00:13:30.043 15:54:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c4198a3d-92d0-4217-8026-6bf9319b50e0 00:13:30.043 15:54:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c4198a3d-92d0-4217-8026-6bf9319b50e0 00:13:30.610 15:54:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1115325 00:13:38.755 Initializing NVMe Controllers 00:13:38.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:38.755 Controller IO queue size 128, less than required. 00:13:38.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:38.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:38.755 Initialization complete. Launching workers. 
00:13:38.755 ======================================================== 00:13:38.755 Latency(us) 00:13:38.755 Device Information : IOPS MiB/s Average min max 00:13:38.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10562.80 41.26 12119.23 2255.78 123258.78 00:13:38.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10397.50 40.62 12318.30 1995.34 51227.64 00:13:38.755 ======================================================== 00:13:38.755 Total : 20960.29 81.88 12217.98 1995.34 123258.78 00:13:38.755 00:13:38.755 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:38.755 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2a281a6-b1f7-4ed4-9c9f-47fbe9cf866c 00:13:39.015 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b82dee6-df91-46a7-84c1-814955918318 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.273 15:55:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.273 rmmod nvme_tcp 00:13:39.273 rmmod nvme_fabrics 00:13:39.273 rmmod nvme_keyring 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1114768 ']' 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1114768 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1114768 ']' 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1114768 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114768 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114768' 00:13:39.273 killing process with pid 1114768 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1114768 00:13:39.273 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1114768 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.532 
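
Annotation: that completes nvmf_lvol. The point of the test is that the snapshot/resize/clone/inflate RPCs traced earlier run while spdk_nvme_perf drives random writes at the volume over TCP. A sketch of just that overlap, reusing $rpc and $lvol from the setup sketch above (serialized exactly as the trace shows):

  build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1                                           # let the initiator connect first
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                  # grow the live volume to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)    # writable clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                   # fully allocate the clone, dropping its snapshot dependency
  wait $perf_pid                                    # perf keeps writing through all of the above
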
15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.532 15:55:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:42.064 00:13:42.064 real 0m19.388s 00:13:42.064 user 1m6.566s 00:13:42.064 sys 0m5.438s 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.064 ************************************ 00:13:42.064 END TEST nvmf_lvol 00:13:42.064 ************************************ 00:13:42.064 15:55:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:42.064 15:55:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:42.064 15:55:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:42.064 15:55:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.064 15:55:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.064 ************************************ 00:13:42.064 START TEST nvmf_lvs_grow 00:13:42.064 ************************************ 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:42.064 * Looking for test storage... 
00:13:42.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.064 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.065 15:55:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.965 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:13:43.966 00:13:43.966 --- 10.0.0.2 ping statistics --- 00:13:43.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.966 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:13:43.966 00:13:43.966 --- 10.0.0.1 ping statistics --- 00:13:43.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.966 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1118593 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1118593 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1118593 ']' 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.966 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 [2024-07-15 15:55:10.626823] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:43.966 [2024-07-15 15:55:10.626904] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.966 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.966 [2024-07-15 15:55:10.691894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.966 [2024-07-15 15:55:10.807018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.966 [2024-07-15 15:55:10.807082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:43.966 [2024-07-15 15:55:10.807100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.966 [2024-07-15 15:55:10.807114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.966 [2024-07-15 15:55:10.807125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.966 [2024-07-15 15:55:10.807166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.224 15:55:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.482 [2024-07-15 15:55:11.214873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 ************************************ 00:13:44.482 START TEST lvs_grow_clean 00:13:44.482 ************************************ 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.482 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.739 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:44.739 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:44.998 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:44.998 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:44.998 15:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:45.258 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:45.258 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:45.258 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7a63439-1e88-42df-9e53-ce162eb92e15 lvol 150 00:13:45.517 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebd15185-d4ad-4f25-aef0-548a07858c5b 00:13:45.517 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.517 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:45.775 [2024-07-15 15:55:12.546220] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:45.775 [2024-07-15 15:55:12.546308] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:45.775 true 00:13:45.776 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:45.776 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:46.033 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:46.033 15:55:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:46.291 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebd15185-d4ad-4f25-aef0-548a07858c5b 00:13:46.550 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:46.808 [2024-07-15 15:55:13.593478] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.808 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1119028 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1119028 /var/tmp/bdevperf.sock 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1119028 ']' 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.066 15:55:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:47.066 [2024-07-15 15:55:13.898918] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
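
Annotation: the I/O load for lvs_grow_clean comes from bdevperf started idle (-z) on a private RPC socket; the NVMe-oF controller is attached and the run is kicked off over that same socket, as traced above. Sketched with shortened paths:

  sock=/var/tmp/bdevperf.sock
  build/examples/bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # (the script polls until the socket is listening before issuing RPCs)
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
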
00:13:47.066 [2024-07-15 15:55:13.898994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119028 ] 00:13:47.066 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.066 [2024-07-15 15:55:13.959749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.323 [2024-07-15 15:55:14.070142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.323 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.323 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:47.323 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:47.891 Nvme0n1 00:13:47.891 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:47.891 [ 00:13:47.891 { 00:13:47.891 "name": "Nvme0n1", 00:13:47.891 "aliases": [ 00:13:47.891 "ebd15185-d4ad-4f25-aef0-548a07858c5b" 00:13:47.891 ], 00:13:47.891 "product_name": "NVMe disk", 00:13:47.891 "block_size": 4096, 00:13:47.891 "num_blocks": 38912, 00:13:47.891 "uuid": "ebd15185-d4ad-4f25-aef0-548a07858c5b", 00:13:47.891 "assigned_rate_limits": { 00:13:47.892 "rw_ios_per_sec": 0, 00:13:47.892 "rw_mbytes_per_sec": 0, 00:13:47.892 "r_mbytes_per_sec": 0, 00:13:47.892 "w_mbytes_per_sec": 0 00:13:47.892 }, 00:13:47.892 "claimed": false, 00:13:47.892 "zoned": false, 00:13:47.892 "supported_io_types": { 00:13:47.892 "read": true, 00:13:47.892 "write": true, 00:13:47.892 "unmap": true, 00:13:47.892 "flush": true, 00:13:47.892 "reset": true, 00:13:47.892 "nvme_admin": true, 00:13:47.892 "nvme_io": true, 00:13:47.892 "nvme_io_md": false, 00:13:47.892 "write_zeroes": true, 00:13:47.892 "zcopy": false, 00:13:47.892 "get_zone_info": false, 00:13:47.892 "zone_management": false, 00:13:47.892 "zone_append": false, 00:13:47.892 "compare": true, 00:13:47.892 "compare_and_write": true, 00:13:47.892 "abort": true, 00:13:47.892 "seek_hole": false, 00:13:47.892 "seek_data": false, 00:13:47.892 "copy": true, 00:13:47.892 "nvme_iov_md": false 00:13:47.892 }, 00:13:47.892 "memory_domains": [ 00:13:47.892 { 00:13:47.892 "dma_device_id": "system", 00:13:47.892 "dma_device_type": 1 00:13:47.892 } 00:13:47.892 ], 00:13:47.892 "driver_specific": { 00:13:47.892 "nvme": [ 00:13:47.892 { 00:13:47.892 "trid": { 00:13:47.892 "trtype": "TCP", 00:13:47.892 "adrfam": "IPv4", 00:13:47.892 "traddr": "10.0.0.2", 00:13:47.892 "trsvcid": "4420", 00:13:47.892 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:47.892 }, 00:13:47.892 "ctrlr_data": { 00:13:47.892 "cntlid": 1, 00:13:47.892 "vendor_id": "0x8086", 00:13:47.892 "model_number": "SPDK bdev Controller", 00:13:47.892 "serial_number": "SPDK0", 00:13:47.892 "firmware_revision": "24.09", 00:13:47.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.892 "oacs": { 00:13:47.892 "security": 0, 00:13:47.892 "format": 0, 00:13:47.892 "firmware": 0, 00:13:47.892 "ns_manage": 0 00:13:47.892 }, 00:13:47.892 "multi_ctrlr": true, 00:13:47.892 "ana_reporting": false 00:13:47.892 }, 
00:13:47.892 "vs": { 00:13:47.892 "nvme_version": "1.3" 00:13:47.892 }, 00:13:47.892 "ns_data": { 00:13:47.892 "id": 1, 00:13:47.892 "can_share": true 00:13:47.892 } 00:13:47.892 } 00:13:47.892 ], 00:13:47.892 "mp_policy": "active_passive" 00:13:47.892 } 00:13:47.892 } 00:13:47.892 ] 00:13:48.151 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1119053 00:13:48.151 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:48.151 15:55:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.151 Running I/O for 10 seconds... 00:13:49.087 Latency(us) 00:13:49.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.087 Nvme0n1 : 1.00 14600.00 57.03 0.00 0.00 0.00 0.00 0.00 00:13:49.087 =================================================================================================================== 00:13:49.087 Total : 14600.00 57.03 0.00 0.00 0.00 0.00 0.00 00:13:49.087 00:13:50.022 15:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:50.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.022 Nvme0n1 : 2.00 14551.00 56.84 0.00 0.00 0.00 0.00 0.00 00:13:50.022 =================================================================================================================== 00:13:50.022 Total : 14551.00 56.84 0.00 0.00 0.00 0.00 0.00 00:13:50.022 00:13:50.303 true 00:13:50.303 15:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:50.303 15:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:50.563 15:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:50.563 15:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:50.563 15:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1119053 00:13:51.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.129 Nvme0n1 : 3.00 14690.00 57.38 0.00 0.00 0.00 0.00 0.00 00:13:51.129 =================================================================================================================== 00:13:51.129 Total : 14690.00 57.38 0.00 0.00 0.00 0.00 0.00 00:13:51.129 00:13:52.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.095 Nvme0n1 : 4.00 14786.00 57.76 0.00 0.00 0.00 0.00 0.00 00:13:52.095 =================================================================================================================== 00:13:52.095 Total : 14786.00 57.76 0.00 0.00 0.00 0.00 0.00 00:13:52.095 00:13:53.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.032 Nvme0n1 : 5.00 14802.80 57.82 0.00 0.00 0.00 0.00 0.00 00:13:53.032 =================================================================================================================== 00:13:53.032 
Total : 14802.80 57.82 0.00 0.00 0.00 0.00 0.00 00:13:53.032 00:13:54.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.411 Nvme0n1 : 6.00 14783.83 57.75 0.00 0.00 0.00 0.00 0.00 00:13:54.411 =================================================================================================================== 00:13:54.411 Total : 14783.83 57.75 0.00 0.00 0.00 0.00 0.00 00:13:54.411 00:13:55.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.350 Nvme0n1 : 7.00 14773.00 57.71 0.00 0.00 0.00 0.00 0.00 00:13:55.350 =================================================================================================================== 00:13:55.350 Total : 14773.00 57.71 0.00 0.00 0.00 0.00 0.00 00:13:55.350 00:13:56.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.288 Nvme0n1 : 8.00 14782.00 57.74 0.00 0.00 0.00 0.00 0.00 00:13:56.288 =================================================================================================================== 00:13:56.288 Total : 14782.00 57.74 0.00 0.00 0.00 0.00 0.00 00:13:56.288 00:13:57.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.225 Nvme0n1 : 9.00 14772.44 57.70 0.00 0.00 0.00 0.00 0.00 00:13:57.225 =================================================================================================================== 00:13:57.225 Total : 14772.44 57.70 0.00 0.00 0.00 0.00 0.00 00:13:57.225 00:13:58.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.164 Nvme0n1 : 10.00 14778.80 57.73 0.00 0.00 0.00 0.00 0.00 00:13:58.164 =================================================================================================================== 00:13:58.164 Total : 14778.80 57.73 0.00 0.00 0.00 0.00 0.00 00:13:58.164 00:13:58.164 00:13:58.164 Latency(us) 00:13:58.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.164 Nvme0n1 : 10.01 14778.66 57.73 0.00 0.00 8654.81 4878.79 16796.63 00:13:58.164 =================================================================================================================== 00:13:58.164 Total : 14778.66 57.73 0.00 0.00 8654.81 4878.79 16796.63 00:13:58.164 0 00:13:58.164 15:55:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1119028 00:13:58.164 15:55:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1119028 ']' 00:13:58.164 15:55:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1119028 00:13:58.164 15:55:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:58.164 15:55:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.164 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119028 00:13:58.164 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:58.164 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:58.164 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119028' 00:13:58.164 killing process with pid 1119028 00:13:58.164 15:55:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1119028 00:13:58.164 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.164 00:13:58.164 Latency(us) 00:13:58.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.164 =================================================================================================================== 00:13:58.164 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.164 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1119028 00:13:58.438 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.696 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.261 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:59.261 15:55:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:59.261 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:59.261 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:59.261 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.519 [2024-07-15 15:55:26.399755] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:59.519 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:13:59.777 request: 00:13:59.777 { 00:13:59.777 "uuid": "c7a63439-1e88-42df-9e53-ce162eb92e15", 00:13:59.777 "method": "bdev_lvol_get_lvstores", 00:13:59.777 "req_id": 1 00:13:59.777 } 00:13:59.777 Got JSON-RPC error response 00:13:59.777 response: 00:13:59.777 { 00:13:59.777 "code": -19, 00:13:59.777 "message": "No such device" 00:13:59.777 } 00:13:59.777 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:59.777 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:59.777 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:59.777 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:59.777 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:00.035 aio_bdev 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ebd15185-d4ad-4f25-aef0-548a07858c5b 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ebd15185-d4ad-4f25-aef0-548a07858c5b 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.035 15:55:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:00.294 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebd15185-d4ad-4f25-aef0-548a07858c5b -t 2000 00:14:00.592 [ 00:14:00.592 { 00:14:00.592 "name": "ebd15185-d4ad-4f25-aef0-548a07858c5b", 00:14:00.592 "aliases": [ 00:14:00.592 "lvs/lvol" 00:14:00.592 ], 00:14:00.592 "product_name": "Logical Volume", 00:14:00.592 "block_size": 4096, 00:14:00.592 "num_blocks": 38912, 00:14:00.592 "uuid": "ebd15185-d4ad-4f25-aef0-548a07858c5b", 00:14:00.592 "assigned_rate_limits": { 00:14:00.592 "rw_ios_per_sec": 0, 00:14:00.592 "rw_mbytes_per_sec": 0, 00:14:00.592 "r_mbytes_per_sec": 0, 00:14:00.592 "w_mbytes_per_sec": 0 00:14:00.592 }, 00:14:00.592 "claimed": false, 00:14:00.592 "zoned": false, 00:14:00.592 "supported_io_types": { 00:14:00.592 "read": true, 00:14:00.592 "write": true, 00:14:00.592 "unmap": true, 00:14:00.592 "flush": false, 00:14:00.592 "reset": true, 00:14:00.592 "nvme_admin": false, 00:14:00.592 "nvme_io": false, 00:14:00.592 
"nvme_io_md": false, 00:14:00.592 "write_zeroes": true, 00:14:00.592 "zcopy": false, 00:14:00.592 "get_zone_info": false, 00:14:00.592 "zone_management": false, 00:14:00.592 "zone_append": false, 00:14:00.592 "compare": false, 00:14:00.592 "compare_and_write": false, 00:14:00.592 "abort": false, 00:14:00.592 "seek_hole": true, 00:14:00.592 "seek_data": true, 00:14:00.592 "copy": false, 00:14:00.592 "nvme_iov_md": false 00:14:00.592 }, 00:14:00.592 "driver_specific": { 00:14:00.592 "lvol": { 00:14:00.592 "lvol_store_uuid": "c7a63439-1e88-42df-9e53-ce162eb92e15", 00:14:00.592 "base_bdev": "aio_bdev", 00:14:00.592 "thin_provision": false, 00:14:00.592 "num_allocated_clusters": 38, 00:14:00.592 "snapshot": false, 00:14:00.592 "clone": false, 00:14:00.592 "esnap_clone": false 00:14:00.592 } 00:14:00.592 } 00:14:00.592 } 00:14:00.592 ] 00:14:00.592 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:00.592 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:14:00.592 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:00.849 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:00.849 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:14:00.849 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:01.107 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:01.107 15:55:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ebd15185-d4ad-4f25-aef0-548a07858c5b 00:14:01.367 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7a63439-1e88-42df-9e53-ce162eb92e15 00:14:01.624 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:01.880 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.880 00:14:01.880 real 0m17.504s 00:14:01.880 user 0m16.889s 00:14:01.880 sys 0m1.956s 00:14:01.880 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.880 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:01.880 ************************************ 00:14:01.880 END TEST lvs_grow_clean 00:14:01.881 ************************************ 00:14:01.881 15:55:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:01.881 15:55:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:01.881 15:55:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.881 15:55:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:01.881 15:55:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:02.138 ************************************ 00:14:02.138 START TEST lvs_grow_dirty 00:14:02.138 ************************************ 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:02.138 15:55:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.396 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:02.396 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:02.656 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d83522cc-4430-40e1-9c81-03af63d06503 00:14:02.656 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:02.656 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.915 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:02.915 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:02.915 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d83522cc-4430-40e1-9c81-03af63d06503 lvol 150 00:14:03.186 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=58afd123-65b3-464e-abd9-570bff4d0737 00:14:03.186 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:03.186 15:55:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:03.446 
[2024-07-15 15:55:30.126057] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:03.446 [2024-07-15 15:55:30.126140] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:03.446 true 00:14:03.446 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:03.446 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:03.706 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:03.706 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.964 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58afd123-65b3-464e-abd9-570bff4d0737 00:14:04.224 15:55:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:04.224 [2024-07-15 15:55:31.145145] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1121086 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1121086 /var/tmp/bdevperf.sock 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1121086 ']' 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
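At this point the dirty test has built a 200 MiB AIO file, a 4 MiB-cluster lvstore (49 data clusters) and a 150 MiB lvol, then grown the file to 400 MiB and rescanned the AIO bdev (51200 to 102400 blocks above). The export and load-generation steps traced here reduce to roughly the following, with the workspace prefix shortened; bdevperf is started suspended (-z) so the target can be attached before the clock starts:

  # pick up the doubled backing file
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58afd123-65b3-464e-abd9-570bff4d0737
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # 4 KiB random writes, QD 128, 10 s, per-second stats (-S 1), start suspended (-z)
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The grow itself happens mid-run: bdev_lvol_grow_lvstore -u d83522cc-... fires at second 2 of the 10-second run below, after which total_data_clusters is re-read as 99.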
00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.482 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:04.742 [2024-07-15 15:55:31.450776] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:04.742 [2024-07-15 15:55:31.450861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121086 ] 00:14:04.742 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.742 [2024-07-15 15:55:31.516367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.742 [2024-07-15 15:55:31.627753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.001 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.001 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:05.001 15:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:05.258 Nvme0n1 00:14:05.258 15:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:05.516 [ 00:14:05.516 { 00:14:05.516 "name": "Nvme0n1", 00:14:05.516 "aliases": [ 00:14:05.516 "58afd123-65b3-464e-abd9-570bff4d0737" 00:14:05.516 ], 00:14:05.516 "product_name": "NVMe disk", 00:14:05.516 "block_size": 4096, 00:14:05.517 "num_blocks": 38912, 00:14:05.517 "uuid": "58afd123-65b3-464e-abd9-570bff4d0737", 00:14:05.517 "assigned_rate_limits": { 00:14:05.517 "rw_ios_per_sec": 0, 00:14:05.517 "rw_mbytes_per_sec": 0, 00:14:05.517 "r_mbytes_per_sec": 0, 00:14:05.517 "w_mbytes_per_sec": 0 00:14:05.517 }, 00:14:05.517 "claimed": false, 00:14:05.517 "zoned": false, 00:14:05.517 "supported_io_types": { 00:14:05.517 "read": true, 00:14:05.517 "write": true, 00:14:05.517 "unmap": true, 00:14:05.517 "flush": true, 00:14:05.517 "reset": true, 00:14:05.517 "nvme_admin": true, 00:14:05.517 "nvme_io": true, 00:14:05.517 "nvme_io_md": false, 00:14:05.517 "write_zeroes": true, 00:14:05.517 "zcopy": false, 00:14:05.517 "get_zone_info": false, 00:14:05.517 "zone_management": false, 00:14:05.517 "zone_append": false, 00:14:05.517 "compare": true, 00:14:05.517 "compare_and_write": true, 00:14:05.517 "abort": true, 00:14:05.517 "seek_hole": false, 00:14:05.517 "seek_data": false, 00:14:05.517 "copy": true, 00:14:05.517 "nvme_iov_md": false 00:14:05.517 }, 00:14:05.517 "memory_domains": [ 00:14:05.517 { 00:14:05.517 "dma_device_id": "system", 00:14:05.517 "dma_device_type": 1 00:14:05.517 } 00:14:05.517 ], 00:14:05.517 "driver_specific": { 00:14:05.517 "nvme": [ 00:14:05.517 { 00:14:05.517 "trid": { 00:14:05.517 "trtype": "TCP", 00:14:05.517 "adrfam": "IPv4", 00:14:05.517 "traddr": "10.0.0.2", 00:14:05.517 "trsvcid": "4420", 00:14:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:05.517 }, 00:14:05.517 "ctrlr_data": { 00:14:05.517 "cntlid": 1, 00:14:05.517 "vendor_id": "0x8086", 00:14:05.517 "model_number": "SPDK bdev Controller", 00:14:05.517 "serial_number": "SPDK0", 
00:14:05.517 "firmware_revision": "24.09", 00:14:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.517 "oacs": { 00:14:05.517 "security": 0, 00:14:05.517 "format": 0, 00:14:05.517 "firmware": 0, 00:14:05.517 "ns_manage": 0 00:14:05.517 }, 00:14:05.517 "multi_ctrlr": true, 00:14:05.517 "ana_reporting": false 00:14:05.517 }, 00:14:05.517 "vs": { 00:14:05.517 "nvme_version": "1.3" 00:14:05.517 }, 00:14:05.517 "ns_data": { 00:14:05.517 "id": 1, 00:14:05.517 "can_share": true 00:14:05.517 } 00:14:05.517 } 00:14:05.517 ], 00:14:05.517 "mp_policy": "active_passive" 00:14:05.517 } 00:14:05.517 } 00:14:05.517 ] 00:14:05.517 15:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1121222 00:14:05.517 15:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:05.517 15:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.776 Running I/O for 10 seconds... 00:14:06.714 Latency(us) 00:14:06.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.714 Nvme0n1 : 1.00 14535.00 56.78 0.00 0.00 0.00 0.00 0.00 00:14:06.714 =================================================================================================================== 00:14:06.714 Total : 14535.00 56.78 0.00 0.00 0.00 0.00 0.00 00:14:06.714 00:14:07.647 15:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:07.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.648 Nvme0n1 : 2.00 14634.50 57.17 0.00 0.00 0.00 0.00 0.00 00:14:07.648 =================================================================================================================== 00:14:07.648 Total : 14634.50 57.17 0.00 0.00 0.00 0.00 0.00 00:14:07.648 00:14:07.904 true 00:14:07.904 15:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:07.904 15:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:08.163 15:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:08.163 15:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:08.163 15:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1121222 00:14:08.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.729 Nvme0n1 : 3.00 14849.67 58.01 0.00 0.00 0.00 0.00 0.00 00:14:08.729 =================================================================================================================== 00:14:08.729 Total : 14849.67 58.01 0.00 0.00 0.00 0.00 0.00 00:14:08.729 00:14:09.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.662 Nvme0n1 : 4.00 14852.25 58.02 0.00 0.00 0.00 0.00 0.00 00:14:09.662 =================================================================================================================== 00:14:09.662 Total : 14852.25 58.02 0.00 
0.00 0.00 0.00 0.00 00:14:09.662 00:14:11.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.042 Nvme0n1 : 5.00 14720.60 57.50 0.00 0.00 0.00 0.00 0.00 00:14:11.042 =================================================================================================================== 00:14:11.042 Total : 14720.60 57.50 0.00 0.00 0.00 0.00 0.00 00:14:11.042 00:14:11.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.979 Nvme0n1 : 6.00 14616.50 57.10 0.00 0.00 0.00 0.00 0.00 00:14:11.979 =================================================================================================================== 00:14:11.979 Total : 14616.50 57.10 0.00 0.00 0.00 0.00 0.00 00:14:11.979 00:14:12.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.940 Nvme0n1 : 7.00 14578.71 56.95 0.00 0.00 0.00 0.00 0.00 00:14:12.940 =================================================================================================================== 00:14:12.940 Total : 14578.71 56.95 0.00 0.00 0.00 0.00 0.00 00:14:12.940 00:14:13.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.878 Nvme0n1 : 8.00 14537.38 56.79 0.00 0.00 0.00 0.00 0.00 00:14:13.878 =================================================================================================================== 00:14:13.878 Total : 14537.38 56.79 0.00 0.00 0.00 0.00 0.00 00:14:13.878 00:14:14.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.814 Nvme0n1 : 9.00 14494.56 56.62 0.00 0.00 0.00 0.00 0.00 00:14:14.814 =================================================================================================================== 00:14:14.814 Total : 14494.56 56.62 0.00 0.00 0.00 0.00 0.00 00:14:14.814 00:14:15.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.750 Nvme0n1 : 10.00 14483.50 56.58 0.00 0.00 0.00 0.00 0.00 00:14:15.750 =================================================================================================================== 00:14:15.750 Total : 14483.50 56.58 0.00 0.00 0.00 0.00 0.00 00:14:15.750 00:14:15.750 00:14:15.750 Latency(us) 00:14:15.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.750 Nvme0n1 : 10.01 14483.51 56.58 0.00 0.00 8829.27 2682.12 15922.82 00:14:15.750 =================================================================================================================== 00:14:15.750 Total : 14483.51 56.58 0.00 0.00 8829.27 2682.12 15922.82 00:14:15.750 0 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1121086 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1121086 ']' 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1121086 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1121086 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.750 15:55:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1121086' 00:14:15.750 killing process with pid 1121086 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1121086 00:14:15.750 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.750 00:14:15.750 Latency(us) 00:14:15.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.750 =================================================================================================================== 00:14:15.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.750 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1121086 00:14:16.007 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:16.265 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:16.831 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:16.831 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1118593 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1118593 00:14:16.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1118593 Killed "${NVMF_APP[@]}" "$@" 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1122550 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1122550 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1122550 ']' 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.832 15:55:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:17.092 [2024-07-15 15:55:43.782858] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:17.092 [2024-07-15 15:55:43.782974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.092 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.092 [2024-07-15 15:55:43.850483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.092 [2024-07-15 15:55:43.962241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.092 [2024-07-15 15:55:43.962296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.092 [2024-07-15 15:55:43.962309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.092 [2024-07-15 15:55:43.962320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.092 [2024-07-15 15:55:43.962329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
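The kill -9 of pid 1118593 above leaves the lvstore dirty on disk; a replacement nvmf_tgt is then started inside the cvl_0_0_ns_spdk namespace with every tracepoint group enabled (-e 0xFFFF), which is what later produces /dev/shm/nvmf_trace.0. A sketch of the restart, with waitforlisten's socket polling reduced to a single loop:

  # restart the target in the test netns, tracepoints on
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # waitforlisten blocks until the RPC socket answers; one probe looks like
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done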
00:14:17.092 [2024-07-15 15:55:43.962357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.351 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.609 [2024-07-15 15:55:44.368977] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:17.609 [2024-07-15 15:55:44.369113] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:17.609 [2024-07-15 15:55:44.369185] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 58afd123-65b3-464e-abd9-570bff4d0737 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=58afd123-65b3-464e-abd9-570bff4d0737 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:17.609 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:17.869 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58afd123-65b3-464e-abd9-570bff4d0737 -t 2000 00:14:18.128 [ 00:14:18.128 { 00:14:18.128 "name": "58afd123-65b3-464e-abd9-570bff4d0737", 00:14:18.128 "aliases": [ 00:14:18.128 "lvs/lvol" 00:14:18.128 ], 00:14:18.128 "product_name": "Logical Volume", 00:14:18.128 "block_size": 4096, 00:14:18.128 "num_blocks": 38912, 00:14:18.128 "uuid": "58afd123-65b3-464e-abd9-570bff4d0737", 00:14:18.128 "assigned_rate_limits": { 00:14:18.128 "rw_ios_per_sec": 0, 00:14:18.128 "rw_mbytes_per_sec": 0, 00:14:18.128 "r_mbytes_per_sec": 0, 00:14:18.128 "w_mbytes_per_sec": 0 00:14:18.128 }, 00:14:18.128 "claimed": false, 00:14:18.128 "zoned": false, 00:14:18.128 "supported_io_types": { 00:14:18.128 "read": true, 00:14:18.128 "write": true, 00:14:18.128 "unmap": true, 00:14:18.128 "flush": false, 00:14:18.128 "reset": true, 00:14:18.128 "nvme_admin": false, 00:14:18.128 "nvme_io": false, 00:14:18.128 "nvme_io_md": 
false, 00:14:18.128 "write_zeroes": true, 00:14:18.128 "zcopy": false, 00:14:18.128 "get_zone_info": false, 00:14:18.128 "zone_management": false, 00:14:18.128 "zone_append": false, 00:14:18.128 "compare": false, 00:14:18.128 "compare_and_write": false, 00:14:18.128 "abort": false, 00:14:18.128 "seek_hole": true, 00:14:18.128 "seek_data": true, 00:14:18.128 "copy": false, 00:14:18.128 "nvme_iov_md": false 00:14:18.128 }, 00:14:18.128 "driver_specific": { 00:14:18.128 "lvol": { 00:14:18.128 "lvol_store_uuid": "d83522cc-4430-40e1-9c81-03af63d06503", 00:14:18.128 "base_bdev": "aio_bdev", 00:14:18.128 "thin_provision": false, 00:14:18.128 "num_allocated_clusters": 38, 00:14:18.128 "snapshot": false, 00:14:18.128 "clone": false, 00:14:18.128 "esnap_clone": false 00:14:18.128 } 00:14:18.128 } 00:14:18.128 } 00:14:18.128 ] 00:14:18.128 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:18.128 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:18.128 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:18.386 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:18.386 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:18.386 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:18.644 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:18.644 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:18.904 [2024-07-15 15:55:45.625805] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
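The assertion being traced across these lines uses the suite's NOT wrapper: the step passes only if bdev_lvol_get_lvstores fails now that aio_bdev is gone. A rough paraphrase of the helper, simplified from the es/valid_exec_arg bookkeeping visible in the trace (the real function in autotest_common.sh does a little more):

  # paraphrase, not the exact helper: succeed only when the wrapped command fails
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # died on a signal: a real failure, propagate it
      (( !es == 0 ))                   # invert: zero exit here means the command failed
  }
  NOT scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503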
00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:18.904 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:19.164 request: 00:14:19.164 { 00:14:19.164 "uuid": "d83522cc-4430-40e1-9c81-03af63d06503", 00:14:19.164 "method": "bdev_lvol_get_lvstores", 00:14:19.164 "req_id": 1 00:14:19.164 } 00:14:19.164 Got JSON-RPC error response 00:14:19.164 response: 00:14:19.164 { 00:14:19.164 "code": -19, 00:14:19.164 "message": "No such device" 00:14:19.164 } 00:14:19.164 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:19.164 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.164 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.164 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.164 15:55:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:19.424 aio_bdev 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 58afd123-65b3-464e-abd9-570bff4d0737 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=58afd123-65b3-464e-abd9-570bff4d0737 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:19.425 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:19.684 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58afd123-65b3-464e-abd9-570bff4d0737 -t 2000 00:14:19.943 [ 00:14:19.943 { 00:14:19.943 "name": "58afd123-65b3-464e-abd9-570bff4d0737", 00:14:19.943 "aliases": [ 00:14:19.943 "lvs/lvol" 00:14:19.943 ], 00:14:19.943 "product_name": "Logical Volume", 00:14:19.943 "block_size": 4096, 00:14:19.943 "num_blocks": 38912, 00:14:19.943 "uuid": "58afd123-65b3-464e-abd9-570bff4d0737", 00:14:19.943 "assigned_rate_limits": { 00:14:19.943 "rw_ios_per_sec": 0, 00:14:19.943 "rw_mbytes_per_sec": 0, 00:14:19.943 "r_mbytes_per_sec": 0, 00:14:19.943 "w_mbytes_per_sec": 0 00:14:19.943 }, 00:14:19.943 "claimed": false, 00:14:19.943 "zoned": false, 00:14:19.943 "supported_io_types": { 
00:14:19.943 "read": true, 00:14:19.943 "write": true, 00:14:19.943 "unmap": true, 00:14:19.943 "flush": false, 00:14:19.943 "reset": true, 00:14:19.943 "nvme_admin": false, 00:14:19.943 "nvme_io": false, 00:14:19.943 "nvme_io_md": false, 00:14:19.943 "write_zeroes": true, 00:14:19.943 "zcopy": false, 00:14:19.943 "get_zone_info": false, 00:14:19.943 "zone_management": false, 00:14:19.943 "zone_append": false, 00:14:19.943 "compare": false, 00:14:19.943 "compare_and_write": false, 00:14:19.943 "abort": false, 00:14:19.943 "seek_hole": true, 00:14:19.943 "seek_data": true, 00:14:19.943 "copy": false, 00:14:19.943 "nvme_iov_md": false 00:14:19.943 }, 00:14:19.943 "driver_specific": { 00:14:19.943 "lvol": { 00:14:19.943 "lvol_store_uuid": "d83522cc-4430-40e1-9c81-03af63d06503", 00:14:19.943 "base_bdev": "aio_bdev", 00:14:19.943 "thin_provision": false, 00:14:19.943 "num_allocated_clusters": 38, 00:14:19.943 "snapshot": false, 00:14:19.943 "clone": false, 00:14:19.943 "esnap_clone": false 00:14:19.943 } 00:14:19.943 } 00:14:19.943 } 00:14:19.943 ] 00:14:19.943 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:19.943 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:19.943 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:20.213 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:20.213 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:20.213 15:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:20.497 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:20.498 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58afd123-65b3-464e-abd9-570bff4d0737 00:14:20.498 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d83522cc-4430-40e1-9c81-03af63d06503 00:14:20.755 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:21.322 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.322 00:14:21.322 real 0m19.157s 00:14:21.322 user 0m47.564s 00:14:21.322 sys 0m5.292s 00:14:21.322 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.322 15:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:21.322 ************************************ 00:14:21.322 END TEST lvs_grow_dirty 00:14:21.322 ************************************ 00:14:21.322 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:21.322 15:55:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:21.323 15:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:21.323 nvmf_trace.0 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.323 rmmod nvme_tcp 00:14:21.323 rmmod nvme_fabrics 00:14:21.323 rmmod nvme_keyring 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1122550 ']' 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1122550 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1122550 ']' 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1122550 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122550 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122550' 00:14:21.323 killing process with pid 1122550 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1122550 00:14:21.323 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1122550 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.581 
15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.581 15:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.118 15:55:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.118 00:14:24.118 real 0m41.953s 00:14:24.118 user 1m10.139s 00:14:24.118 sys 0m9.070s 00:14:24.118 15:55:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.118 15:55:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 ************************************ 00:14:24.118 END TEST nvmf_lvs_grow 00:14:24.118 ************************************ 00:14:24.118 15:55:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.118 15:55:50 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:24.118 15:55:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.118 15:55:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.118 15:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 ************************************ 00:14:24.118 START TEST nvmf_bdev_io_wait 00:14:24.118 ************************************ 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:24.118 * Looking for test storage... 
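Before bdev_io_wait can run, nvmftestinit has nvmf/common.sh enumerate supported NICs (Intel E810/X722 and several Mellanox parts, matched by the PCI IDs listed below) and resolve each PCI function to its kernel net device through sysfs, which yields the cvl_0_0/cvl_0_1 names used throughout this log. The discovery loop traced below reduces to roughly:

  # pci_devs is populated from the PCI-ID tables traced below; map each
  # function to its net device(s) via sysfs
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done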
00:14:24.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.118 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.119 15:55:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.023 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.023 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:26.023 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.023 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:14:26.023 00:14:26.023 --- 10.0.0.2 ping statistics --- 00:14:26.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.023 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:14:26.023 00:14:26.023 --- 10.0.0.1 ping statistics --- 00:14:26.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.023 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1125073 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1125073 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1125073 ']' 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.023 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.024 [2024-07-15 15:55:52.748579] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:26.024 [2024-07-15 15:55:52.748652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.024 [2024-07-15 15:55:52.811362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.024 [2024-07-15 15:55:52.918985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.024 [2024-07-15 15:55:52.919046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.024 [2024-07-15 15:55:52.919059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.024 [2024-07-15 15:55:52.919070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.024 [2024-07-15 15:55:52.919079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.024 [2024-07-15 15:55:52.919131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.024 [2024-07-15 15:55:52.919188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.024 [2024-07-15 15:55:52.919254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.024 [2024-07-15 15:55:52.919256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 [2024-07-15 15:55:53.068470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
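The rpc_cmd traces around this point boil down to a short target bring-up: tune the bdev layer, resume the startup that --wait-for-rpc deferred, create the TCP transport, then publish a Malloc bdev behind a subsystem. A minimal sketch of the same sequence with SPDK's scripts/rpc.py (rpc_cmd in these tests is a thin wrapper around it, talking to the default /var/tmp/spdk.sock; every argument below is copied from the traces, only the explicit rpc.py spelling is assumed):

  rpc=scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1    # tiny bdev IO pool/cache, presumably to starve the pool and exercise the io_wait path this test targets
  $rpc framework_start_init          # resume init paused by --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420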
00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 Malloc0 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 [2024-07-15 15:55:53.129664] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1125096 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1125097 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1125100 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.283 { 00:14:26.283 "params": { 00:14:26.283 "name": "Nvme$subsystem", 00:14:26.283 "trtype": "$TEST_TRANSPORT", 00:14:26.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.283 "adrfam": "ipv4", 00:14:26.283 "trsvcid": "$NVMF_PORT", 00:14:26.283 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.283 "hdgst": ${hdgst:-false}, 00:14:26.283 "ddgst": ${ddgst:-false} 00:14:26.283 }, 00:14:26.283 "method": "bdev_nvme_attach_controller" 00:14:26.283 } 00:14:26.283 EOF 00:14:26.283 )") 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1125102 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.283 { 00:14:26.283 "params": { 00:14:26.283 "name": "Nvme$subsystem", 00:14:26.283 "trtype": "$TEST_TRANSPORT", 00:14:26.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.283 "adrfam": "ipv4", 00:14:26.283 "trsvcid": "$NVMF_PORT", 00:14:26.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.283 "hdgst": ${hdgst:-false}, 00:14:26.283 "ddgst": ${ddgst:-false} 00:14:26.283 }, 00:14:26.283 "method": "bdev_nvme_attach_controller" 00:14:26.283 } 00:14:26.283 EOF 00:14:26.283 )") 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.283 { 00:14:26.283 "params": { 00:14:26.283 "name": "Nvme$subsystem", 00:14:26.283 "trtype": "$TEST_TRANSPORT", 00:14:26.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.283 "adrfam": "ipv4", 00:14:26.283 "trsvcid": "$NVMF_PORT", 00:14:26.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.283 "hdgst": ${hdgst:-false}, 00:14:26.283 "ddgst": ${ddgst:-false} 00:14:26.283 }, 00:14:26.283 "method": "bdev_nvme_attach_controller" 00:14:26.283 } 00:14:26.283 EOF 00:14:26.283 )") 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.283 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.283 { 00:14:26.283 "params": { 00:14:26.283 "name": "Nvme$subsystem", 00:14:26.284 "trtype": "$TEST_TRANSPORT", 00:14:26.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.284 "adrfam": "ipv4", 00:14:26.284 "trsvcid": "$NVMF_PORT", 00:14:26.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.284 "hdgst": ${hdgst:-false}, 00:14:26.284 "ddgst": ${ddgst:-false} 00:14:26.284 }, 00:14:26.284 "method": "bdev_nvme_attach_controller" 00:14:26.284 } 00:14:26.284 EOF 00:14:26.284 )") 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1125096 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.284 "params": { 00:14:26.284 "name": "Nvme1", 00:14:26.284 "trtype": "tcp", 00:14:26.284 "traddr": "10.0.0.2", 00:14:26.284 "adrfam": "ipv4", 00:14:26.284 "trsvcid": "4420", 00:14:26.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.284 "hdgst": false, 00:14:26.284 "ddgst": false 00:14:26.284 }, 00:14:26.284 "method": "bdev_nvme_attach_controller" 00:14:26.284 }' 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.284 "params": { 00:14:26.284 "name": "Nvme1", 00:14:26.284 "trtype": "tcp", 00:14:26.284 "traddr": "10.0.0.2", 00:14:26.284 "adrfam": "ipv4", 00:14:26.284 "trsvcid": "4420", 00:14:26.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.284 "hdgst": false, 00:14:26.284 "ddgst": false 00:14:26.284 }, 00:14:26.284 "method": "bdev_nvme_attach_controller" 00:14:26.284 }' 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.284 "params": { 00:14:26.284 "name": "Nvme1", 00:14:26.284 "trtype": "tcp", 00:14:26.284 "traddr": "10.0.0.2", 00:14:26.284 "adrfam": "ipv4", 00:14:26.284 "trsvcid": "4420", 00:14:26.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.284 "hdgst": false, 00:14:26.284 "ddgst": false 00:14:26.284 }, 00:14:26.284 "method": "bdev_nvme_attach_controller" 00:14:26.284 }' 00:14:26.284 15:55:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.284 "params": { 00:14:26.284 "name": "Nvme1", 00:14:26.284 "trtype": "tcp", 00:14:26.284 "traddr": "10.0.0.2", 00:14:26.284 "adrfam": "ipv4", 00:14:26.284 "trsvcid": "4420", 00:14:26.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.284 "hdgst": false, 00:14:26.284 "ddgst": false 00:14:26.284 }, 00:14:26.284 
"method": "bdev_nvme_attach_controller" 00:14:26.284 }' 00:14:26.284 [2024-07-15 15:55:53.178974] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:26.284 [2024-07-15 15:55:53.178974] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:26.284 [2024-07-15 15:55:53.179005] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:26.284 [2024-07-15 15:55:53.179066] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 15:55:53.179066] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 [2024-07-15 15:55:53.179053] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:26.284 --proc-type=auto ] 00:14:26.284 --proc-type=auto ] 00:14:26.284 [2024-07-15 15:55:53.179079] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:26.284 [2024-07-15 15:55:53.179112] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:26.542 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.542 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.542 [2024-07-15 15:55:53.348742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.542 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.542 [2024-07-15 15:55:53.446202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:26.542 [2024-07-15 15:55:53.449071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.800 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.800 [2024-07-15 15:55:53.547707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:26.800 [2024-07-15 15:55:53.576389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.800 [2024-07-15 15:55:53.629681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.800 [2024-07-15 15:55:53.681478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:26.800 [2024-07-15 15:55:53.724468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:27.057 Running I/O for 1 seconds... 00:14:27.057 Running I/O for 1 seconds... 00:14:27.057 Running I/O for 1 seconds... 00:14:27.057 Running I/O for 1 seconds... 
00:14:27.993 00:14:27.993 Latency(us) 00:14:27.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.993 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:27.993 Nvme1n1 : 1.02 6676.18 26.08 0.00 0.00 18880.06 7815.77 34564.17 00:14:27.993 =================================================================================================================== 00:14:27.993 Total : 6676.18 26.08 0.00 0.00 18880.06 7815.77 34564.17 00:14:27.993 00:14:27.993 Latency(us) 00:14:27.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.993 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:27.993 Nvme1n1 : 1.01 3566.35 13.93 0.00 0.00 35717.50 9514.86 243891.01 00:14:27.993 =================================================================================================================== 00:14:27.993 Total : 3566.35 13.93 0.00 0.00 35717.50 9514.86 243891.01 00:14:27.993 00:14:27.993 Latency(us) 00:14:27.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.993 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:27.993 Nvme1n1 : 1.00 191102.83 746.50 0.00 0.00 667.13 294.31 898.09 00:14:27.993 =================================================================================================================== 00:14:27.994 Total : 191102.83 746.50 0.00 0.00 667.13 294.31 898.09 00:14:28.253 00:14:28.253 Latency(us) 00:14:28.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.253 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:28.253 Nvme1n1 : 1.01 6783.08 26.50 0.00 0.00 18798.81 6602.15 44661.57 00:14:28.253 =================================================================================================================== 00:14:28.253 Total : 6783.08 26.50 0.00 0.00 18798.81 6602.15 44661.57 00:14:28.253 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1125097 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1125100 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1125102 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.511 rmmod nvme_tcp 00:14:28.511 rmmod nvme_fabrics 00:14:28.511 rmmod nvme_keyring 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1125073 ']' 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1125073 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1125073 ']' 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1125073 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125073 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125073' 00:14:28.511 killing process with pid 1125073 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1125073 00:14:28.511 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1125073 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.768 15:55:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.304 15:55:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.304 00:14:31.304 real 0m7.233s 00:14:31.304 user 0m16.369s 00:14:31.304 sys 0m3.377s 00:14:31.304 15:55:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.304 15:55:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:31.304 ************************************ 00:14:31.304 END TEST nvmf_bdev_io_wait 00:14:31.304 ************************************ 00:14:31.304 15:55:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.304 15:55:57 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.304 15:55:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.304 15:55:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.304 15:55:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.304 ************************************ 00:14:31.304 START TEST nvmf_queue_depth 00:14:31.304 ************************************ 
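Before nvmf_queue_depth repeats it below, it is worth condensing the topology both tests build in nvmf_tcp_init: the first NIC port is moved into a private network namespace to act as the target (10.0.0.2) while the second port stays in the root namespace as the initiator (10.0.0.1). A sketch assembled from the ip/iptables traces (the cvl_0_0/cvl_0_1 names are specific to this rig's ice-driven E810 ports):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator-side interface
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

This is why every nvmf_tgt invocation and target-side command in the log carries the "ip netns exec cvl_0_0_ns_spdk" prefix.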
00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.304 * Looking for test storage... 00:14:31.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.304 15:55:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.209 
15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:33.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:33.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:33.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:33.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.209 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:33.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:14:33.210 00:14:33.210 --- 10.0.0.2 ping statistics --- 00:14:33.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.210 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:14:33.210 00:14:33.210 --- 10.0.0.1 ping statistics --- 00:14:33.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.210 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1127323 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1127323 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1127323 ']' 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.210 15:55:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.210 [2024-07-15 15:55:59.918152] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:33.210 [2024-07-15 15:55:59.918250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.210 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.210 [2024-07-15 15:55:59.979816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.210 [2024-07-15 15:56:00.102669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.210 [2024-07-15 15:56:00.102733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.210 [2024-07-15 15:56:00.102761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.210 [2024-07-15 15:56:00.102772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.210 [2024-07-15 15:56:00.102781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.210 [2024-07-15 15:56:00.102808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.146 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 [2024-07-15 15:56:00.932276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 Malloc0 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.147 
15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 [2024-07-15 15:56:00.994648] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1127472 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1127472 /var/tmp/bdevperf.sock 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1127472 ']' 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.147 15:56:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.147 [2024-07-15 15:56:01.041450] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
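The queue-depth test drives everything above through SPDK's RPC interface: a TCP transport, a 64 MiB malloc bdev behind subsystem cnode1, a listener on the target address, then a bdevperf initiator pinned to a queue depth of 1024, which is the property under test. A condensed sketch of the same sequence, assuming scripts/rpc.py and build/examples/bdevperf from an SPDK checkout are on PATH (paths shortened from the ones in the trace):

# Target side, via the default /var/tmp/spdk.sock of the nvmf_tgt above.
rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit size
rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: -z makes bdevperf wait until a controller is attached
# over its private RPC socket; perform_tests then kicks off the 10 s run.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests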
00:14:34.147 [2024-07-15 15:56:01.041524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127472 ] 00:14:34.147 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.407 [2024-07-15 15:56:01.102615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.407 [2024-07-15 15:56:01.216479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.407 15:56:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.407 15:56:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:34.407 15:56:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:34.407 15:56:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.407 15:56:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.674 NVMe0n1 00:14:34.674 15:56:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.674 15:56:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.674 Running I/O for 10 seconds... 00:14:46.923 00:14:46.923 Latency(us) 00:14:46.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.923 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:46.923 Verification LBA range: start 0x0 length 0x4000 00:14:46.923 NVMe0n1 : 10.10 8488.61 33.16 0.00 0.00 120068.26 22427.88 75730.49 00:14:46.923 =================================================================================================================== 00:14:46.923 Total : 8488.61 33.16 0.00 0.00 120068.26 22427.88 75730.49 00:14:46.923 0 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1127472 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1127472 ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1127472 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127472 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127472' 00:14:46.923 killing process with pid 1127472 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1127472 00:14:46.923 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.923 00:14:46.923 Latency(us) 00:14:46.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.923 
=================================================================================================================== 00:14:46.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1127472 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.923 rmmod nvme_tcp 00:14:46.923 rmmod nvme_fabrics 00:14:46.923 rmmod nvme_keyring 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1127323 ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1127323 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1127323 ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1127323 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.923 15:56:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127323 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127323' 00:14:46.923 killing process with pid 1127323 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1127323 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1127323 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.923 15:56:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.492 15:56:14 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.492 00:14:47.492 real 0m16.587s 00:14:47.492 user 0m23.493s 00:14:47.492 sys 0m2.919s 00:14:47.492 15:56:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.492 15:56:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 ************************************ 00:14:47.492 END TEST nvmf_queue_depth 00:14:47.492 ************************************ 00:14:47.492 15:56:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:47.492 15:56:14 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:47.492 15:56:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.492 15:56:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.492 15:56:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 ************************************ 00:14:47.492 START TEST nvmf_target_multipath 00:14:47.492 ************************************ 00:14:47.492 15:56:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:47.751 * Looking for test storage... 00:14:47.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.751 15:56:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.752 15:56:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:49.652 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:49.652 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:49.652 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:49.652 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:14:49.652 00:14:49.652 --- 10.0.0.2 ping statistics --- 00:14:49.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.652 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:14:49.652 00:14:49.652 --- 10.0.0.1 ping statistics --- 00:14:49.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.652 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.652 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:49.910 only one NIC for nvmf test 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.910 rmmod nvme_tcp 00:14:49.910 rmmod nvme_fabrics 00:14:49.910 rmmod nvme_keyring 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.910 15:56:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.817 00:14:51.817 real 0m4.314s 00:14:51.817 user 0m0.816s 00:14:51.817 sys 0m1.493s 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.817 15:56:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:51.817 ************************************ 00:14:51.817 END TEST nvmf_target_multipath 00:14:51.817 ************************************ 00:14:51.817 15:56:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.817 15:56:18 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:51.817 15:56:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:51.817 15:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.817 15:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.074 ************************************ 00:14:52.075 START TEST nvmf_zcopy 00:14:52.075 ************************************ 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:52.075 * Looking for test storage... 
00:14:52.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.075 15:56:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:53.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.975 
15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:53.975 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:53.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.975 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:54.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.234 15:56:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:54.234 00:14:54.234 --- 10.0.0.2 ping statistics --- 00:14:54.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.234 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:54.234 00:14:54.234 --- 10.0.0.1 ping statistics --- 00:14:54.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.234 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1132646 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1132646 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1132646 ']' 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.235 15:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:54.235 [2024-07-15 15:56:21.107557] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:54.235 [2024-07-15 15:56:21.107638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.235 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.494 [2024-07-15 15:56:21.170027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.494 [2024-07-15 15:56:21.286381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.494 [2024-07-15 15:56:21.286436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:54.494 [2024-07-15 15:56:21.286459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.494 [2024-07-15 15:56:21.286473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.494 [2024-07-15 15:56:21.286485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.494 [2024-07-15 15:56:21.286514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 [2024-07-15 15:56:22.125566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 [2024-07-15 15:56:22.141782] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 malloc0 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 
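Two flags distinguish this zcopy target setup from the queue-depth one: the transport is created with --zcopy and with in-capsule data disabled (-c 0), steering payloads through the zero-copy path, and the subsystem is capped at ten namespaces (-m 10). A sketch mirroring the rpc_cmd calls traced here, including the add_ns that follows just below:

rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # zero-copy, no in-capsule data
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB bdev, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # NSID 1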
15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.457 { 00:14:55.457 "params": { 00:14:55.457 "name": "Nvme$subsystem", 00:14:55.457 "trtype": "$TEST_TRANSPORT", 00:14:55.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.457 "adrfam": "ipv4", 00:14:55.457 "trsvcid": "$NVMF_PORT", 00:14:55.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.457 "hdgst": ${hdgst:-false}, 00:14:55.457 "ddgst": ${ddgst:-false} 00:14:55.457 }, 00:14:55.457 "method": "bdev_nvme_attach_controller" 00:14:55.457 } 00:14:55.457 EOF 00:14:55.457 )") 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:55.457 15:56:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.457 "params": { 00:14:55.457 "name": "Nvme1", 00:14:55.457 "trtype": "tcp", 00:14:55.457 "traddr": "10.0.0.2", 00:14:55.457 "adrfam": "ipv4", 00:14:55.457 "trsvcid": "4420", 00:14:55.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.457 "hdgst": false, 00:14:55.457 "ddgst": false 00:14:55.457 }, 00:14:55.457 "method": "bdev_nvme_attach_controller" 00:14:55.457 }' 00:14:55.457 [2024-07-15 15:56:22.225888] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:55.457 [2024-07-15 15:56:22.225979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132796 ] 00:14:55.457 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.457 [2024-07-15 15:56:22.295108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.715 [2024-07-15 15:56:22.416098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.715 Running I/O for 10 seconds... 
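The run above is the whole zero-copy data path in miniature: create the TCP transport with zero-copy enabled, expose a malloc bdev as namespace 1 of cnode1, and drive verify I/O at it from bdevperf. A minimal sketch of the same sequence, assuming the stock rpc.py client (the harness's rpc_cmd is a wrapper around it) and the default /var/tmp/spdk.sock socket:

  # Target side: zero-copy TCP transport, one subsystem, one listener,
  # and a 32 MiB / 4096-byte-block malloc bdev attached as NSID 1.
  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Initiator side: 10 s of verify I/O, queue depth 128, 8 KiB IOs, with the
  # generated bdev_nvme_attach_controller JSON handed over on an anonymous fd.
  bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192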
00:15:07.941 
00:15:07.941                                                            Latency(us)
00:15:07.941 Device Information                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:07.941 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:07.941 	 Verification LBA range: start 0x0 length 0x1000
00:15:07.941 	 Nvme1n1             :      10.01    5691.86      44.47       0.00       0.00   22425.10    1553.45   31651.46
00:15:07.941 ===================================================================================================================
00:15:07.941 Total                                                    :               5691.86      44.47       0.00       0.00   22425.10    1553.45   31651.46
00:15:07.941 15:56:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1133996
00:15:07.941 15:56:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:07.941 15:56:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:07.941 15:56:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:07.941 15:56:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:07.942 {
00:15:07.942 "params": {
00:15:07.942 "name": "Nvme$subsystem",
00:15:07.942 "trtype": "$TEST_TRANSPORT",
00:15:07.942 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:07.942 "adrfam": "ipv4",
00:15:07.942 "trsvcid": "$NVMF_PORT",
00:15:07.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:07.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:07.942 "hdgst": ${hdgst:-false},
00:15:07.942 "ddgst": ${ddgst:-false}
00:15:07.942 },
00:15:07.942 "method": "bdev_nvme_attach_controller"
00:15:07.942 }
00:15:07.942 EOF
00:15:07.942 )")
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:07.942 [2024-07-15 15:56:32.972704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.942 [2024-07-15 15:56:32.972749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
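The 10-second verify table above is internally consistent: throughput is IOPS times IO size, and 5691.86 IOPS at 8192-byte IOs gives 5691.86 * 8192 / 2^20 ≈ 44.47 MiB/s, matching the MiB/s column. A one-line shell check of that arithmetic:

  awk 'BEGIN { printf "%.2f MiB/s\n", 5691.86 * 8192 / (1024 * 1024) }'   # prints 44.47 MiB/s

The second bdevperf launch then swaps the workload for a 5-second 50/50 random read/write mix (-t 5 -w randrw -M 50) against the same generated attach-controller config.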
00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:07.942 15:56:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:07.942 "params": { 00:15:07.942 "name": "Nvme1", 00:15:07.942 "trtype": "tcp", 00:15:07.942 "traddr": "10.0.0.2", 00:15:07.942 "adrfam": "ipv4", 00:15:07.942 "trsvcid": "4420", 00:15:07.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.942 "hdgst": false, 00:15:07.942 "ddgst": false 00:15:07.942 }, 00:15:07.942 "method": "bdev_nvme_attach_controller" 00:15:07.942 }' 00:15:07.942 [2024-07-15 15:56:32.980660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:32.980687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:32.988678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:32.988703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:32.996691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:32.996713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.004709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.004729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.010118] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:07.942 [2024-07-15 15:56:33.010194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133996 ] 00:15:07.942 [2024-07-15 15:56:33.012729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.012749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.020750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.020771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.028770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.028790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.036792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.036811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.942 [2024-07-15 15:56:33.044834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.044860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.052855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.052887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.060884] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.060923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.068905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.068941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.072980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.942 [2024-07-15 15:56:33.076943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.076965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.084992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.085028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.092972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.092993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.100990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.101019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.109006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.109027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.117031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.117052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.125053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.125073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.133077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.133097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.141129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.141182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.149123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.149144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.157142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.157177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.165181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.165201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.173215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.173242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.181240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.181267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.189260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.189286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.190529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.942 [2024-07-15 15:56:33.197280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.197306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.205316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.205347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.213353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.213392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.221372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.221412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.229395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.229435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.237419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.237460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.245441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.245480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.253438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.253466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.261469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.261501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.269504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.269545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.277523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.277561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.285518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:07.942 [2024-07-15 15:56:33.285542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.293541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.293565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.301583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.301611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.309594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.309622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.317617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.317643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.325644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.942 [2024-07-15 15:56:33.325671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.942 [2024-07-15 15:56:33.333665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.333692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.341689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.341716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.349707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.349734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.357982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.358008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.365753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.365780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 Running I/O for 5 seconds... 
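Each repeated pair of lines above is one failed nvmf_subsystem_add_ns RPC: NSID 1 is already occupied by malloc0, so subsystem.c rejects the namespace and nvmf_rpc.c reports the RPC failure, while the randrw bdevperf job keeps running underneath. Illustrative only (not necessarily the harness's exact code), a loop of this shape, assuming the same rpc.py client and $perfpid holding the bdevperf PID, reproduces the pattern:

  # Re-issue the add-namespace RPC while bdevperf I/O is in flight; every
  # attempt fails with "Requested NSID 1 already in use" followed by
  # "Unable to add namespace".
  while kill -0 "$perfpid" 2>/dev/null; do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done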
00:15:07.943 [2024-07-15 15:56:33.373774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.373799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.388291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.388322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.399590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.399621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.411572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.411602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.423486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.423517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.434554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.434585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.446398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.446427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.457613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.457644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.468899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.468942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.480470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.480500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.491896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.491940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.503238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.503269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.514660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.514690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.526232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.526263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.537751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 
[2024-07-15 15:56:33.537781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.549318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.549348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.560897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.560927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.572570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.572599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.583940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.583970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.595128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.595157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.606494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.606524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.617768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.617798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.629212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.629256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.641084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.641114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.652481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.652510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.663687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.663717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.676836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.676866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.687265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.687294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.698509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.698539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.711679] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.711709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.722075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.722105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.733807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.733837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.745703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.745734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.756973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.757003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.768890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.768920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.780309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.780339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.791529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.791558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.802969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.802999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.814257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.814287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.826090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.826121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.836991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.837021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.848026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.848064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.859276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.859306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.870451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.870481] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.881772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.881801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.892893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.892923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.904179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.904210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.915533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.915563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.926971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.927001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.938195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.938224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.949229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.949259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.960241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.960271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.971747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.943 [2024-07-15 15:56:33.971777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.943 [2024-07-15 15:56:33.983021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:33.983051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:33.994321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:33.994350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.005859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.005899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.016716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.016746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.027833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.027863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.039408] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.039437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.050921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.050951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.061508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.061545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.072763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.072793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.084269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.084299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.095591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.095621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.106831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.106860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.117937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.117967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.128946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.128989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.141915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.141945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.152355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.152384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.164295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.164324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.175383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.175412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.186668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.186698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.198125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.198156] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.209432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.209461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.220719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.220750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.234245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.234276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.244448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.244478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.256294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.256324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.267485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.267515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.278526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.278565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.289449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.289479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.302258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.302287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.312677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.312707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.323660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.323689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.336748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.336778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.347189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.347218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.358448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.358478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.370947] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.370976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.380937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.380967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.392632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.392662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.403686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.403715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.415156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.415186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.426166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.426197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.437406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.437436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.448564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.448593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.460230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.460260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.471593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.471623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.482794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.482824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.494378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.494417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.505172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.505202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.516384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.516415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.529506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.529536] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.539933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.539962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.551095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.551125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.564002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.564033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.574619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.574649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.585673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.585703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.597028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.597059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.608312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.608342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.619147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.619178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.630259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.630289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.944 [2024-07-15 15:56:34.643425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.944 [2024-07-15 15:56:34.643455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.654072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.654102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.665166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.665196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.676705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.676735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.688302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.688332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.699316] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.699346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.710369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.710398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.721568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.721598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.732988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.733017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.743770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.743800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.754797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.754827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.766296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.766326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.777762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.777792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.790983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.791013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.800654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.800684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.812403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.812433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.823494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.823524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.834474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.834504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.845594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.845624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.945 [2024-07-15 15:56:34.856670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.945 [2024-07-15 15:56:34.856701] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.945 [2024-07-15 15:56:34.868208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.945 [2024-07-15 15:56:34.868238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1553:nvmf_rpc_ns_paused) repeats continuously, roughly every 11 ms, from 15:56:34.879 through 15:56:38.342 ...]
00:15:11.623 [2024-07-15 15:56:38.353627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:11.623 [2024-07-15 15:56:38.353657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:11.623 [2024-07-15 15:56:38.365175]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.365206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.376578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.376609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.387750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.387781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.394917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.394947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 00:15:11.623 Latency(us) 00:15:11.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.623 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:11.623 Nvme1n1 : 5.01 11227.31 87.71 0.00 0.00 11384.00 5267.15 23010.42 00:15:11.623 =================================================================================================================== 00:15:11.623 Total : 11227.31 87.71 0.00 0.00 11384.00 5267.15 23010.42 00:15:11.623 [2024-07-15 15:56:38.402936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.402964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.410948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.410977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.418972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.419002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.427039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.427089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.435058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.435106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.443074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.623 [2024-07-15 15:56:38.443123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.623 [2024-07-15 15:56:38.451096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.451144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.459119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.459165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.467151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.467199] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.475163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.475211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.483184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.483231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.491205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.491254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.499233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.499282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.507265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.507315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.515280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.515327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.523294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.523341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.531316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.531364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.539343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.539392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.624 [2024-07-15 15:56:38.547337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.624 [2024-07-15 15:56:38.547369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.555344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.555371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.563364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.563389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.571385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.571411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.579418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.579446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.587476] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.587520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.595500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.595548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.603522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.603566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.611499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.611527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.619519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.619544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.627541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.627566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.635565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.635590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.643603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.643635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.651665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.651715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.659679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.659726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.667656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.667691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.675675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.675701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 [2024-07-15 15:56:38.683700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.883 [2024-07-15 15:56:38.683725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1133996) - No such process 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1133996 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 delay0 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.884 15:56:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:11.884 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.143 [2024-07-15 15:56:38.848026] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:18.718 Initializing NVMe Controllers 00:15:18.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:18.718 Initialization complete. Launching workers. 
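The abort run above drives random reads and writes through delay0, whose ~1 s of injected latency keeps commands queued long enough to be aborted; the NS/CTRLR statistics that follow count how many of them each abort actually caught. A minimal standalone sketch of the same sequence, assuming a target already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a pre-existing malloc bdev named malloc0 (the paths are this workspace's; rpc.py is SPDK's stock JSON-RPC client):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Wrap malloc0 in a delay bdev; -r/-t/-w/-n are the average and p99 read/write
# latencies to inject (microseconds, so roughly 1 s per I/O here).
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Expose the slow bdev as namespace 1 of the existing subsystem.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Queue 64-deep random R/W I/O for 5 s on core 0 and abort it aggressively.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'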
00:15:18.718 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 146 00:15:18.718 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 422, failed to submit 44 00:15:18.718 success 272, unsuccess 150, failed 0 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.718 15:56:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.718 rmmod nvme_tcp 00:15:18.718 rmmod nvme_fabrics 00:15:18.718 rmmod nvme_keyring 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1132646 ']' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1132646 ']' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132646' 00:15:18.718 killing process with pid 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1132646 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.718 15:56:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.629 15:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.629 00:15:20.629 real 0m28.679s 00:15:20.629 user 0m42.143s 00:15:20.629 sys 0m8.390s 00:15:20.629 15:56:47 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.629 15:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.629 ************************************ 00:15:20.629 END TEST nvmf_zcopy 00:15:20.629 ************************************ 00:15:20.629 15:56:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:20.629 15:56:47 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:20.629 15:56:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:20.629 15:56:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.629 15:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.629 ************************************ 00:15:20.629 START TEST nvmf_nmic 00:15:20.629 ************************************ 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:20.629 * Looking for test storage... 00:15:20.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=[... same value with /opt/go/1.21.1/bin prepended again; duplicates elided ...]
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=[... same value with /opt/protoc/21.7/bin prepended again; duplicates elided ...]
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... the exported PATH value; duplicates elided ...]
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.629 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.889 15:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.797 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.797 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.797 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.797 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:15:22.797 00:15:22.797 --- 10.0.0.2 ping statistics --- 00:15:22.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.798 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:15:22.798 00:15:22.798 --- 10.0.0.1 ping statistics --- 00:15:22.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.798 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1137371 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1137371 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1137371 ']' 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.798 15:56:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.056 [2024-07-15 15:56:49.771113] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:23.056 [2024-07-15 15:56:49.771223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.056 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.056 [2024-07-15 15:56:49.840820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.056 [2024-07-15 15:56:49.963302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.056 [2024-07-15 15:56:49.963381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.056 [2024-07-15 15:56:49.963398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.056 [2024-07-15 15:56:49.963410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.056 [2024-07-15 15:56:49.963422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.056 [2024-07-15 15:56:49.963509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.056 [2024-07-15 15:56:49.963566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.056 [2024-07-15 15:56:49.963629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.056 [2024-07-15 15:56:49.963631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 [2024-07-15 15:56:50.752814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 Malloc0 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.991 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 [2024-07-15 15:56:50.806360] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:23.992 test case1: single bdev can't be used in multiple subsystems 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 [2024-07-15 15:56:50.830183] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:23.992 [2024-07-15 15:56:50.830212] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:23.992 [2024-07-15 15:56:50.830226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.992 request: 00:15:23.992 { 00:15:23.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:23.992 "namespace": { 00:15:23.992 "bdev_name": "Malloc0", 00:15:23.992 "no_auto_visible": false 00:15:23.992 }, 00:15:23.992 "method": "nvmf_subsystem_add_ns", 00:15:23.992 "req_id": 1 00:15:23.992 } 00:15:23.992 Got JSON-RPC error response 00:15:23.992 response: 00:15:23.992 { 00:15:23.992 "code": -32602, 00:15:23.992 "message": "Invalid parameters" 00:15:23.992 } 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:23.992 Adding namespace failed - expected result. 
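Test case 1 above checks that a bdev claimed by one subsystem cannot be added to a second: Malloc0 is already attached to cnode1, so bdev_open fails with an exclusive_write claim and the RPC comes back as JSON-RPC error -32602. A rough standalone equivalent of the calls the harness issued through rpc_cmd (same RPC names and arguments as in this run; the trailing || echo is only there to make the expected failure explicit):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Expected to fail: Malloc0 is claimed exclusive_write by cnode1, so the
# target answers "Invalid parameters" (-32602); the test records this as
# nmic_status=1 and treats it as the passing result.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns failed as expected'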
00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:23.992 test case2: host connect to nvmf target in multiple paths 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 [2024-07-15 15:56:50.838303] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.992 15:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.558 15:56:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:25.494 15:56:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.494 15:56:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:25.494 15:56:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.494 15:56:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:25.494 15:56:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:27.398 15:56:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:27.398 [global] 00:15:27.398 thread=1 00:15:27.398 invalidate=1 00:15:27.398 rw=write 00:15:27.398 time_based=1 00:15:27.398 runtime=1 00:15:27.398 ioengine=libaio 00:15:27.398 direct=1 00:15:27.398 bs=4096 00:15:27.398 iodepth=1 00:15:27.398 norandommap=0 00:15:27.398 numjobs=1 00:15:27.398 00:15:27.398 verify_dump=1 00:15:27.398 verify_backlog=512 00:15:27.398 verify_state_save=0 00:15:27.398 do_verify=1 00:15:27.398 verify=crc32c-intel 00:15:27.398 [job0] 00:15:27.398 filename=/dev/nvme0n1 00:15:27.398 Could not set queue depth (nvme0n1) 00:15:27.398 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.398 fio-3.35 00:15:27.398 Starting 1 thread 00:15:28.805 00:15:28.805 job0: (groupid=0, jobs=1): err= 0: pid=1138021: Mon Jul 15 15:56:55 2024 00:15:28.805 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:15:28.805 slat (nsec): min=8852, max=34668, avg=24426.14, stdev=9299.12 
00:15:28.805 clat (usec): min=40727, max=41201, avg=40958.44, stdev=95.61 00:15:28.805 lat (usec): min=40745, max=41218, avg=40982.87, stdev=94.63 00:15:28.805 clat percentiles (usec): 00:15:28.805 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:28.805 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:28.805 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:28.805 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:28.805 | 99.99th=[41157] 00:15:28.805 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:15:28.805 slat (nsec): min=8757, max=58591, avg=18631.22, stdev=6789.54 00:15:28.805 clat (usec): min=178, max=258, avg=218.27, stdev=17.74 00:15:28.805 lat (usec): min=188, max=296, avg=236.90, stdev=22.40 00:15:28.805 clat percentiles (usec): 00:15:28.805 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:15:28.805 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:15:28.805 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247], 00:15:28.805 | 99.00th=[ 258], 99.50th=[ 258], 99.90th=[ 260], 99.95th=[ 260], 00:15:28.805 | 99.99th=[ 260] 00:15:28.805 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.805 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.805 lat (usec) : 250=93.63%, 500=2.25% 00:15:28.805 lat (msec) : 50=4.12% 00:15:28.805 cpu : usr=0.68%, sys=1.17%, ctx=534, majf=0, minf=2 00:15:28.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.805 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.805 00:15:28.805 Run status group 0 (all jobs): 00:15:28.805 READ: bw=85.8KiB/s (87.8kB/s), 85.8KiB/s-85.8KiB/s (87.8kB/s-87.8kB/s), io=88.0KiB (90.1kB), run=1026-1026msec 00:15:28.805 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:15:28.805 00:15:28.805 Disk stats (read/write): 00:15:28.805 nvme0n1: ios=68/512, merge=0/0, ticks=764/106, in_queue=870, util=91.78% 00:15:28.805 15:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:28.805 15:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.805 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
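The fio run whose results appear above is fully described by the job file fio-wrapper dumped before starting. A hand-run approximation could pass the same keys as command-line options (this relies on fio's usual --option=value form rather than on anything the wrapper guarantees, and /dev/nvme0n1 is simply whatever block device the two nvme connect calls produced on this host):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --norandommap=0 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0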
00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.806 rmmod nvme_tcp 00:15:28.806 rmmod nvme_fabrics 00:15:28.806 rmmod nvme_keyring 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1137371 ']' 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1137371 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1137371 ']' 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1137371 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137371 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137371' 00:15:28.806 killing process with pid 1137371 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1137371 00:15:28.806 15:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1137371 00:15:29.372 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.372 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.372 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.372 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.373 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.373 15:56:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.373 15:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.373 15:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.285 15:56:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.285 00:15:31.285 real 0m10.573s 00:15:31.285 user 0m25.095s 00:15:31.285 sys 0m2.335s 00:15:31.285 15:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.285 15:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:31.285 ************************************ 00:15:31.285 END TEST nvmf_nmic 00:15:31.285 ************************************ 00:15:31.285 15:56:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.285 15:56:58 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:31.285 15:56:58 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.285 15:56:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.285 15:56:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.285 ************************************ 00:15:31.285 START TEST nvmf_fio_target 00:15:31.285 ************************************ 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:31.285 * Looking for test storage... 00:15:31.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.285 15:56:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.286 15:56:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.816 15:57:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.816 15:57:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:33.816 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:15:33.817 00:15:33.817 --- 10.0.0.2 ping statistics --- 00:15:33.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.817 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:15:33.817 00:15:33.817 --- 10.0.0.1 ping statistics --- 00:15:33.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.817 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1140127 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1140127 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1140127 ']' 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
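For reference, the nvmf_tcp_init sequence traced above reduces to the following namespace plumbing (commands copied from the xtrace output, minus prefixes; ordering lightly condensed). cvl_0_0 becomes the in-namespace target NIC at 10.0.0.2, cvl_0_1 stays host-side as the initiator NIC at 10.0.0.1, and the ping pair verifies the path in both directions before the target app starts:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> host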
00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.817 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.817 [2024-07-15 15:57:00.437373] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:33.817 [2024-07-15 15:57:00.437460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.817 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.817 [2024-07-15 15:57:00.509775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.817 [2024-07-15 15:57:00.622444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.817 [2024-07-15 15:57:00.622505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.817 [2024-07-15 15:57:00.622518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.817 [2024-07-15 15:57:00.622529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.817 [2024-07-15 15:57:00.622538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.817 [2024-07-15 15:57:00.622653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.817 [2024-07-15 15:57:00.623924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.817 [2024-07-15 15:57:00.623982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.817 [2024-07-15 15:57:00.623979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.076 15:57:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.334 [2024-07-15 15:57:01.022426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.334 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.592 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:34.592 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.849 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:34.849 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.107 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
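The target construction that follows in the trace (target/fio.sh lines 19 through 46) condenses to this rpc.py sequence; the full /var/jenkins/workspace/... script path is shortened to rpc.py here for readability, and the placeholder host values stand in for the generated NVME_HOSTNQN/NVME_HOSTID seen earlier:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512            # issued seven times, yielding Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=<NVME_HOSTNQN> --hostid=<NVME_HOSTID> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

With four namespaces attached to cnode1, the initiator ends up with nvme0n1..nvme0n4, which is why waitforserial expects a device count of 4 and the fio job files below target those four block devices.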
00:15:35.107 15:57:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.366 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:35.366 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:35.624 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.882 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:35.882 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:36.140 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:36.140 15:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:36.398 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:36.398 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:36.655 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.912 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:36.912 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.169 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:37.169 15:57:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.426 15:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.685 [2024-07-15 15:57:04.470781] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.685 15:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:37.942 15:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:38.201 15:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:38.770 15:57:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:41.303 15:57:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:41.303 [global] 00:15:41.303 thread=1 00:15:41.303 invalidate=1 00:15:41.303 rw=write 00:15:41.303 time_based=1 00:15:41.303 runtime=1 00:15:41.303 ioengine=libaio 00:15:41.303 direct=1 00:15:41.303 bs=4096 00:15:41.303 iodepth=1 00:15:41.303 norandommap=0 00:15:41.303 numjobs=1 00:15:41.303 00:15:41.303 verify_dump=1 00:15:41.303 verify_backlog=512 00:15:41.303 verify_state_save=0 00:15:41.303 do_verify=1 00:15:41.303 verify=crc32c-intel 00:15:41.303 [job0] 00:15:41.303 filename=/dev/nvme0n1 00:15:41.303 [job1] 00:15:41.303 filename=/dev/nvme0n2 00:15:41.303 [job2] 00:15:41.303 filename=/dev/nvme0n3 00:15:41.303 [job3] 00:15:41.303 filename=/dev/nvme0n4 00:15:41.303 Could not set queue depth (nvme0n1) 00:15:41.303 Could not set queue depth (nvme0n2) 00:15:41.303 Could not set queue depth (nvme0n3) 00:15:41.303 Could not set queue depth (nvme0n4) 00:15:41.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.303 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.303 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.303 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.303 fio-3.35 00:15:41.303 Starting 4 threads 00:15:42.236 00:15:42.236 job0: (groupid=0, jobs=1): err= 0: pid=1141632: Mon Jul 15 15:57:09 2024 00:15:42.236 read: IOPS=23, BW=93.3KiB/s (95.5kB/s)(96.0KiB/1029msec) 00:15:42.236 slat (nsec): min=11087, max=41944, avg=18269.17, stdev=7470.56 00:15:42.236 clat (usec): min=419, max=42032, avg=36033.37, stdev=13731.57 00:15:42.236 lat (usec): min=445, max=42053, avg=36051.64, stdev=13730.45 00:15:42.236 clat percentiles (usec): 00:15:42.236 | 1.00th=[ 420], 5.00th=[ 424], 10.00th=[ 594], 20.00th=[40633], 00:15:42.236 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:42.236 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:42.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:42.236 | 99.99th=[42206] 00:15:42.236 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:15:42.236 slat (nsec): min=10104, max=47630, avg=22888.04, stdev=6394.04 
00:15:42.236 clat (usec): min=224, max=531, avg=290.32, stdev=34.72 00:15:42.236 lat (usec): min=238, max=545, avg=313.21, stdev=35.35 00:15:42.236 clat percentiles (usec): 00:15:42.236 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 269], 00:15:42.236 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:15:42.236 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 359], 00:15:42.236 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 529], 99.95th=[ 529], 00:15:42.236 | 99.99th=[ 529] 00:15:42.236 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:15:42.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:42.236 lat (usec) : 250=4.85%, 500=90.86%, 750=0.37% 00:15:42.236 lat (msec) : 50=3.92% 00:15:42.236 cpu : usr=1.07%, sys=1.17%, ctx=537, majf=0, minf=1 00:15:42.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.236 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.236 job1: (groupid=0, jobs=1): err= 0: pid=1141642: Mon Jul 15 15:57:09 2024 00:15:42.237 read: IOPS=28, BW=113KiB/s (115kB/s)(116KiB/1031msec) 00:15:42.237 slat (nsec): min=7112, max=33990, avg=15936.48, stdev=5936.46 00:15:42.237 clat (usec): min=371, max=41496, avg=29805.14, stdev=18421.74 00:15:42.237 lat (usec): min=388, max=41512, avg=29821.08, stdev=18423.26 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 523], 00:15:42.237 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:42.237 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:42.237 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:42.237 | 99.99th=[41681] 00:15:42.237 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:15:42.237 slat (nsec): min=8929, max=69449, avg=21439.91, stdev=8959.02 00:15:42.237 clat (usec): min=228, max=463, avg=297.17, stdev=42.79 00:15:42.237 lat (usec): min=237, max=505, avg=318.61, stdev=45.06 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:15:42.237 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:15:42.237 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 388], 00:15:42.237 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 465], 99.95th=[ 465], 00:15:42.237 | 99.99th=[ 465] 00:15:42.237 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:15:42.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:42.237 lat (usec) : 250=2.59%, 500=92.98%, 750=0.55% 00:15:42.237 lat (msec) : 50=3.88% 00:15:42.237 cpu : usr=0.58%, sys=0.97%, ctx=542, majf=0, minf=1 00:15:42.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.237 job2: (groupid=0, jobs=1): err= 0: pid=1141644: Mon Jul 15 15:57:09 2024 00:15:42.237 read: 
IOPS=1168, BW=4675KiB/s (4788kB/s)(4680KiB/1001msec) 00:15:42.237 slat (nsec): min=5932, max=45959, avg=9734.48, stdev=4788.48 00:15:42.237 clat (usec): min=330, max=41128, avg=490.55, stdev=1189.93 00:15:42.237 lat (usec): min=339, max=41145, avg=500.28, stdev=1190.17 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 359], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 424], 00:15:42.237 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 465], 00:15:42.237 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 510], 95.00th=[ 523], 00:15:42.237 | 99.00th=[ 553], 99.50th=[ 603], 99.90th=[ 832], 99.95th=[41157], 00:15:42.237 | 99.99th=[41157] 00:15:42.237 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:42.237 slat (nsec): min=7612, max=67325, avg=15668.05, stdev=8285.78 00:15:42.237 clat (usec): min=193, max=3457, avg=248.52, stdev=95.04 00:15:42.237 lat (usec): min=201, max=3481, avg=264.19, stdev=97.32 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:15:42.237 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 243], 00:15:42.237 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 326], 00:15:42.237 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 930], 99.95th=[ 3458], 00:15:42.237 | 99.99th=[ 3458] 00:15:42.237 bw ( KiB/s): min= 5952, max= 5952, per=49.94%, avg=5952.00, stdev= 0.00, samples=1 00:15:42.237 iops : min= 1488, max= 1488, avg=1488.00, stdev= 0.00, samples=1 00:15:42.237 lat (usec) : 250=36.84%, 500=55.10%, 750=7.91%, 1000=0.07% 00:15:42.237 lat (msec) : 4=0.04%, 50=0.04% 00:15:42.237 cpu : usr=2.60%, sys=4.20%, ctx=2707, majf=0, minf=1 00:15:42.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 issued rwts: total=1170,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.237 job3: (groupid=0, jobs=1): err= 0: pid=1141645: Mon Jul 15 15:57:09 2024 00:15:42.237 read: IOPS=32, BW=132KiB/s (135kB/s)(132KiB/1002msec) 00:15:42.237 slat (nsec): min=7090, max=39807, avg=14656.45, stdev=8080.90 00:15:42.237 clat (usec): min=389, max=41145, avg=24737.52, stdev=19947.19 00:15:42.237 lat (usec): min=398, max=41154, avg=24752.18, stdev=19952.46 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 457], 00:15:42.237 | 30.00th=[ 469], 40.00th=[31851], 50.00th=[41157], 60.00th=[41157], 00:15:42.237 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:42.237 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:42.237 | 99.99th=[41157] 00:15:42.237 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:42.237 slat (nsec): min=8847, max=68965, avg=26971.19, stdev=13377.89 00:15:42.237 clat (usec): min=190, max=911, avg=327.31, stdev=100.25 00:15:42.237 lat (usec): min=200, max=935, avg=354.28, stdev=108.20 00:15:42.237 clat percentiles (usec): 00:15:42.237 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 221], 00:15:42.237 | 30.00th=[ 241], 40.00th=[ 289], 50.00th=[ 326], 60.00th=[ 371], 00:15:42.237 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 453], 00:15:42.237 | 99.00th=[ 619], 99.50th=[ 775], 99.90th=[ 914], 99.95th=[ 914], 00:15:42.237 | 99.99th=[ 914] 
00:15:42.237 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:15:42.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:42.237 lat (usec) : 250=29.54%, 500=64.77%, 750=1.28%, 1000=0.73% 00:15:42.237 lat (msec) : 50=3.67% 00:15:42.237 cpu : usr=0.80%, sys=1.20%, ctx=546, majf=0, minf=2 00:15:42.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.237 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.237 00:15:42.237 Run status group 0 (all jobs): 00:15:42.237 READ: bw=4873KiB/s (4990kB/s), 93.3KiB/s-4675KiB/s (95.5kB/s-4788kB/s), io=5024KiB (5145kB), run=1001-1031msec 00:15:42.237 WRITE: bw=11.6MiB/s (12.2MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:15:42.237 00:15:42.237 Disk stats (read/write): 00:15:42.237 nvme0n1: ios=71/512, merge=0/0, ticks=1026/135, in_queue=1161, util=97.70% 00:15:42.237 nvme0n2: ios=49/512, merge=0/0, ticks=1644/151, in_queue=1795, util=97.76% 00:15:42.237 nvme0n3: ios=1082/1143, merge=0/0, ticks=1423/283, in_queue=1706, util=98.00% 00:15:42.237 nvme0n4: ios=85/512, merge=0/0, ticks=1363/158, in_queue=1521, util=97.67% 00:15:42.237 15:57:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:42.495 [global] 00:15:42.495 thread=1 00:15:42.495 invalidate=1 00:15:42.495 rw=randwrite 00:15:42.495 time_based=1 00:15:42.495 runtime=1 00:15:42.495 ioengine=libaio 00:15:42.495 direct=1 00:15:42.495 bs=4096 00:15:42.495 iodepth=1 00:15:42.495 norandommap=0 00:15:42.495 numjobs=1 00:15:42.495 00:15:42.495 verify_dump=1 00:15:42.495 verify_backlog=512 00:15:42.495 verify_state_save=0 00:15:42.495 do_verify=1 00:15:42.495 verify=crc32c-intel 00:15:42.495 [job0] 00:15:42.495 filename=/dev/nvme0n1 00:15:42.495 [job1] 00:15:42.495 filename=/dev/nvme0n2 00:15:42.495 [job2] 00:15:42.495 filename=/dev/nvme0n3 00:15:42.496 [job3] 00:15:42.496 filename=/dev/nvme0n4 00:15:42.496 Could not set queue depth (nvme0n1) 00:15:42.496 Could not set queue depth (nvme0n2) 00:15:42.496 Could not set queue depth (nvme0n3) 00:15:42.496 Could not set queue depth (nvme0n4) 00:15:42.496 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.496 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.496 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.496 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.496 fio-3.35 00:15:42.496 Starting 4 threads 00:15:43.905 00:15:43.905 job0: (groupid=0, jobs=1): err= 0: pid=1142019: Mon Jul 15 15:57:10 2024 00:15:43.905 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:15:43.905 slat (nsec): min=13222, max=35892, avg=26425.43, stdev=9550.23 00:15:43.905 clat (usec): min=532, max=41384, avg=39213.71, stdev=8433.14 00:15:43.905 lat (usec): min=567, max=41418, avg=39240.14, stdev=8431.31 00:15:43.905 clat percentiles (usec): 00:15:43.905 | 1.00th=[ 529], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:15:43.905 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:43.905 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:43.905 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:43.905 | 99.99th=[41157] 00:15:43.905 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:15:43.905 slat (nsec): min=7791, max=47356, avg=17323.54, stdev=7573.02 00:15:43.905 clat (usec): min=186, max=286, avg=230.86, stdev=17.55 00:15:43.905 lat (usec): min=199, max=300, avg=248.19, stdev=20.16 00:15:43.905 clat percentiles (usec): 00:15:43.905 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:15:43.905 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:15:43.905 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 260], 00:15:43.905 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:15:43.905 | 99.99th=[ 285] 00:15:43.905 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:15:43.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:43.905 lat (usec) : 250=83.74%, 500=11.96%, 750=0.19% 00:15:43.905 lat (msec) : 50=4.11% 00:15:43.905 cpu : usr=0.68%, sys=1.16%, ctx=536, majf=0, minf=1 00:15:43.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.905 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.905 job1: (groupid=0, jobs=1): err= 0: pid=1142020: Mon Jul 15 15:57:10 2024 00:15:43.905 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:15:43.905 slat (nsec): min=12466, max=35677, avg=24394.00, stdev=9487.35 00:15:43.905 clat (usec): min=427, max=42043, avg=38969.67, stdev=9526.87 00:15:43.905 lat (usec): min=444, max=42056, avg=38994.06, stdev=9529.70 00:15:43.905 clat percentiles (usec): 00:15:43.905 | 1.00th=[ 429], 5.00th=[22676], 10.00th=[40633], 20.00th=[41157], 00:15:43.905 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:15:43.905 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:43.905 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:43.905 | 99.99th=[42206] 00:15:43.905 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:15:43.905 slat (nsec): min=7542, max=57401, avg=19274.83, stdev=8779.29 00:15:43.905 clat (usec): min=191, max=3249, avg=272.28, stdev=141.32 00:15:43.905 lat (usec): min=206, max=3271, avg=291.56, stdev=142.45 00:15:43.905 clat percentiles (usec): 00:15:43.905 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:15:43.905 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 269], 00:15:43.905 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 388], 00:15:43.905 | 99.00th=[ 474], 99.50th=[ 486], 99.90th=[ 3261], 99.95th=[ 3261], 00:15:43.905 | 99.99th=[ 3261] 00:15:43.905 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:15:43.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:43.905 lat (usec) : 250=40.45%, 500=55.24%, 750=0.19% 00:15:43.905 lat (msec) : 4=0.19%, 50=3.93% 00:15:43.905 cpu : usr=1.09%, sys=0.89%, ctx=535, majf=0, minf=2 00:15:43.906 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.906 job2: (groupid=0, jobs=1): err= 0: pid=1142021: Mon Jul 15 15:57:10 2024 00:15:43.906 read: IOPS=19, BW=76.9KiB/s (78.8kB/s)(80.0KiB/1040msec) 00:15:43.906 slat (nsec): min=9109, max=37014, avg=25378.70, stdev=10985.81 00:15:43.906 clat (usec): min=40904, max=42013, avg=41518.23, stdev=514.23 00:15:43.906 lat (usec): min=40939, max=42027, avg=41543.61, stdev=514.80 00:15:43.906 clat percentiles (usec): 00:15:43.906 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:43.906 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:15:43.906 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:43.906 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:43.906 | 99.99th=[42206] 00:15:43.906 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:15:43.906 slat (nsec): min=8444, max=72352, avg=25072.73, stdev=11376.12 00:15:43.906 clat (usec): min=242, max=1386, avg=375.29, stdev=112.65 00:15:43.906 lat (usec): min=256, max=1413, avg=400.36, stdev=116.76 00:15:43.906 clat percentiles (usec): 00:15:43.906 | 1.00th=[ 251], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 293], 00:15:43.906 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 347], 60.00th=[ 388], 00:15:43.906 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 482], 95.00th=[ 510], 00:15:43.906 | 99.00th=[ 701], 99.50th=[ 1237], 99.90th=[ 1385], 99.95th=[ 1385], 00:15:43.906 | 99.99th=[ 1385] 00:15:43.906 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:15:43.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:43.906 lat (usec) : 250=0.94%, 500=88.72%, 750=5.64%, 1000=0.38% 00:15:43.906 lat (msec) : 2=0.56%, 50=3.76% 00:15:43.906 cpu : usr=0.77%, sys=1.73%, ctx=532, majf=0, minf=1 00:15:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.906 job3: (groupid=0, jobs=1): err= 0: pid=1142022: Mon Jul 15 15:57:10 2024 00:15:43.906 read: IOPS=1011, BW=4048KiB/s (4145kB/s)(4080KiB/1008msec) 00:15:43.906 slat (nsec): min=6291, max=48850, avg=13980.89, stdev=5248.42 00:15:43.906 clat (usec): min=289, max=43058, avg=707.75, stdev=3837.64 00:15:43.906 lat (usec): min=298, max=43075, avg=721.73, stdev=3838.83 00:15:43.906 clat percentiles (usec): 00:15:43.906 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:15:43.906 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:15:43.906 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 383], 00:15:43.906 | 99.00th=[ 1004], 99.50th=[41157], 99.90th=[42206], 99.95th=[43254], 00:15:43.906 | 99.99th=[43254] 00:15:43.906 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:15:43.906 slat (nsec): min=5854, max=63420, avg=14896.94, stdev=7993.51 00:15:43.906 clat (usec): min=179, 
max=4108, avg=241.45, stdev=129.68 00:15:43.906 lat (usec): min=187, max=4129, avg=256.35, stdev=131.14 00:15:43.906 clat percentiles (usec): 00:15:43.906 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:15:43.906 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 241], 00:15:43.906 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 310], 00:15:43.906 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 799], 99.95th=[ 4113], 00:15:43.906 | 99.99th=[ 4113] 00:15:43.906 bw ( KiB/s): min= 8192, max= 8192, per=83.20%, avg=8192.00, stdev= 0.00, samples=1 00:15:43.906 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:43.906 lat (usec) : 250=32.24%, 500=66.19%, 750=0.83%, 1000=0.15% 00:15:43.906 lat (msec) : 2=0.05%, 4=0.05%, 10=0.05%, 50=0.44% 00:15:43.906 cpu : usr=1.69%, sys=4.47%, ctx=2044, majf=0, minf=1 00:15:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.906 issued rwts: total=1020,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.906 00:15:43.906 Run status group 0 (all jobs): 00:15:43.906 READ: bw=4173KiB/s (4273kB/s), 76.9KiB/s-4048KiB/s (78.8kB/s-4145kB/s), io=4340KiB (4444kB), run=1008-1040msec 00:15:43.906 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-4063KiB/s (2016kB/s-4161kB/s), io=10.0MiB (10.5MB), run=1008-1040msec 00:15:43.906 00:15:43.906 Disk stats (read/write): 00:15:43.906 nvme0n1: ios=61/512, merge=0/0, ticks=1097/112, in_queue=1209, util=99.80% 00:15:43.906 nvme0n2: ios=67/512, merge=0/0, ticks=966/124, in_queue=1090, util=98.17% 00:15:43.906 nvme0n3: ios=36/512, merge=0/0, ticks=807/166, in_queue=973, util=91.02% 00:15:43.906 nvme0n4: ios=855/1024, merge=0/0, ticks=1032/234, in_queue=1266, util=91.16% 00:15:43.906 15:57:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:43.906 [global] 00:15:43.906 thread=1 00:15:43.906 invalidate=1 00:15:43.906 rw=write 00:15:43.906 time_based=1 00:15:43.906 runtime=1 00:15:43.906 ioengine=libaio 00:15:43.906 direct=1 00:15:43.906 bs=4096 00:15:43.906 iodepth=128 00:15:43.906 norandommap=0 00:15:43.906 numjobs=1 00:15:43.906 00:15:43.906 verify_dump=1 00:15:43.906 verify_backlog=512 00:15:43.906 verify_state_save=0 00:15:43.906 do_verify=1 00:15:43.906 verify=crc32c-intel 00:15:43.906 [job0] 00:15:43.906 filename=/dev/nvme0n1 00:15:43.906 [job1] 00:15:43.906 filename=/dev/nvme0n2 00:15:43.906 [job2] 00:15:43.906 filename=/dev/nvme0n3 00:15:43.906 [job3] 00:15:43.906 filename=/dev/nvme0n4 00:15:43.906 Could not set queue depth (nvme0n1) 00:15:43.906 Could not set queue depth (nvme0n2) 00:15:43.906 Could not set queue depth (nvme0n3) 00:15:43.906 Could not set queue depth (nvme0n4) 00:15:44.164 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.164 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.164 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.164 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.164 fio-3.35 00:15:44.164 Starting 4 
threads 00:15:45.550 00:15:45.550 job0: (groupid=0, jobs=1): err= 0: pid=1142250: Mon Jul 15 15:57:12 2024 00:15:45.550 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:15:45.550 slat (usec): min=2, max=9947, avg=92.93, stdev=620.80 00:15:45.550 clat (usec): min=4656, max=26229, avg=12494.52, stdev=3449.80 00:15:45.550 lat (usec): min=4668, max=26244, avg=12587.45, stdev=3475.50 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10028], 00:15:45.550 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11600], 60.00th=[12256], 00:15:45.550 | 70.00th=[12911], 80.00th=[14484], 90.00th=[17433], 95.00th=[20055], 00:15:45.550 | 99.00th=[23987], 99.50th=[25035], 99.90th=[26084], 99.95th=[26084], 00:15:45.550 | 99.99th=[26346] 00:15:45.550 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1002msec); 0 zone resets 00:15:45.550 slat (usec): min=3, max=9952, avg=82.06, stdev=498.76 00:15:45.550 clat (usec): min=300, max=33171, avg=11291.51, stdev=4552.84 00:15:45.550 lat (usec): min=1517, max=33211, avg=11373.58, stdev=4571.32 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 3884], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7767], 00:15:45.550 | 30.00th=[ 9110], 40.00th=[10421], 50.00th=[11076], 60.00th=[11469], 00:15:45.550 | 70.00th=[11863], 80.00th=[12649], 90.00th=[16057], 95.00th=[19792], 00:15:45.550 | 99.00th=[29492], 99.50th=[30278], 99.90th=[33162], 99.95th=[33162], 00:15:45.550 | 99.99th=[33162] 00:15:45.550 bw ( KiB/s): min=20480, max=23208, per=32.91%, avg=21844.00, stdev=1928.99, samples=2 00:15:45.550 iops : min= 5120, max= 5802, avg=5461.00, stdev=482.25, samples=2 00:15:45.550 lat (usec) : 500=0.01% 00:15:45.550 lat (msec) : 2=0.02%, 4=0.75%, 10=26.59%, 20=67.62%, 50=5.01% 00:15:45.550 cpu : usr=7.19%, sys=12.09%, ctx=424, majf=0, minf=13 00:15:45.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:45.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.550 issued rwts: total=5120,5588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.550 job1: (groupid=0, jobs=1): err= 0: pid=1142251: Mon Jul 15 15:57:12 2024 00:15:45.550 read: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1006msec) 00:15:45.550 slat (usec): min=2, max=25827, avg=186.18, stdev=1311.72 00:15:45.550 clat (usec): min=4984, max=86247, avg=23206.88, stdev=15160.44 00:15:45.550 lat (usec): min=4995, max=86259, avg=23393.06, stdev=15289.33 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 6194], 5.00th=[10028], 10.00th=[10683], 20.00th=[11731], 00:15:45.550 | 30.00th=[13435], 40.00th=[14877], 50.00th=[16188], 60.00th=[19530], 00:15:45.550 | 70.00th=[23987], 80.00th=[40109], 90.00th=[46924], 95.00th=[55837], 00:15:45.550 | 99.00th=[72877], 99.50th=[72877], 99.90th=[86508], 99.95th=[86508], 00:15:45.550 | 99.99th=[86508] 00:15:45.550 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:15:45.550 slat (usec): min=3, max=23528, avg=146.36, stdev=823.52 00:15:45.550 clat (usec): min=6030, max=78077, avg=19665.07, stdev=12640.61 00:15:45.550 lat (usec): min=6038, max=78095, avg=19811.43, stdev=12715.47 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 6587], 5.00th=[ 8356], 10.00th=[10421], 20.00th=[11207], 00:15:45.550 | 30.00th=[11994], 40.00th=[13829], 50.00th=[16712], 60.00th=[20317], 
00:15:45.550 | 70.00th=[22938], 80.00th=[23462], 90.00th=[26346], 95.00th=[54264], 00:15:45.550 | 99.00th=[70779], 99.50th=[74974], 99.90th=[77071], 99.95th=[78119], 00:15:45.550 | 99.99th=[78119] 00:15:45.550 bw ( KiB/s): min= 8192, max=16384, per=18.51%, avg=12288.00, stdev=5792.62, samples=2 00:15:45.550 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:15:45.550 lat (msec) : 10=6.59%, 20=53.47%, 50=34.21%, 100=5.73% 00:15:45.550 cpu : usr=2.09%, sys=4.08%, ctx=364, majf=0, minf=15 00:15:45.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:45.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.550 issued rwts: total=2845,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.550 job2: (groupid=0, jobs=1): err= 0: pid=1142255: Mon Jul 15 15:57:12 2024 00:15:45.550 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:15:45.550 slat (usec): min=2, max=11721, avg=102.56, stdev=709.57 00:15:45.550 clat (usec): min=4605, max=26287, avg=13679.43, stdev=3420.10 00:15:45.550 lat (usec): min=4615, max=26332, avg=13781.99, stdev=3456.55 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 6587], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11338], 00:15:45.550 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[13304], 00:15:45.550 | 70.00th=[14484], 80.00th=[16319], 90.00th=[19268], 95.00th=[20841], 00:15:45.550 | 99.00th=[23725], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:15:45.550 | 99.99th=[26346] 00:15:45.550 write: IOPS=4815, BW=18.8MiB/s (19.7MB/s)(19.1MiB/1013msec); 0 zone resets 00:15:45.550 slat (usec): min=3, max=22052, avg=95.84, stdev=657.05 00:15:45.550 clat (usec): min=980, max=69258, avg=13427.78, stdev=8053.40 00:15:45.550 lat (usec): min=998, max=69264, avg=13523.61, stdev=8093.03 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 7308], 20.00th=[ 8356], 00:15:45.550 | 30.00th=[10028], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:15:45.550 | 70.00th=[13042], 80.00th=[15008], 90.00th=[19792], 95.00th=[26870], 00:15:45.550 | 99.00th=[55837], 99.50th=[55837], 99.90th=[60031], 99.95th=[63701], 00:15:45.550 | 99.99th=[69731] 00:15:45.550 bw ( KiB/s): min=17056, max=20952, per=28.63%, avg=19004.00, stdev=2754.89, samples=2 00:15:45.550 iops : min= 4264, max= 5238, avg=4751.00, stdev=688.72, samples=2 00:15:45.550 lat (usec) : 1000=0.02% 00:15:45.550 lat (msec) : 2=0.21%, 4=0.48%, 10=17.49%, 20=73.18%, 50=7.72% 00:15:45.550 lat (msec) : 100=0.90% 00:15:45.550 cpu : usr=7.11%, sys=8.70%, ctx=472, majf=0, minf=7 00:15:45.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:45.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.550 issued rwts: total=4608,4878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.550 job3: (groupid=0, jobs=1): err= 0: pid=1142259: Mon Jul 15 15:57:12 2024 00:15:45.550 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:15:45.550 slat (usec): min=2, max=37103, avg=154.10, stdev=1171.17 00:15:45.550 clat (usec): min=6782, max=82753, avg=20480.94, stdev=13512.12 00:15:45.550 lat (usec): min=6795, max=82758, 
avg=20635.04, stdev=13565.82 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[12649], 20.00th=[13435], 00:15:45.550 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15795], 60.00th=[16712], 00:15:45.550 | 70.00th=[18744], 80.00th=[23987], 90.00th=[34341], 95.00th=[48497], 00:15:45.550 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:15:45.550 | 99.99th=[82314] 00:15:45.550 write: IOPS=3241, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1014msec); 0 zone resets 00:15:45.550 slat (usec): min=3, max=18636, avg=150.10, stdev=890.43 00:15:45.550 clat (usec): min=2539, max=48962, avg=19500.52, stdev=8854.58 00:15:45.550 lat (usec): min=2550, max=48975, avg=19650.62, stdev=8906.20 00:15:45.550 clat percentiles (usec): 00:15:45.550 | 1.00th=[ 5211], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11207], 00:15:45.550 | 30.00th=[13042], 40.00th=[15008], 50.00th=[18220], 60.00th=[22152], 00:15:45.551 | 70.00th=[23462], 80.00th=[25035], 90.00th=[33817], 95.00th=[35914], 00:15:45.551 | 99.00th=[45876], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:15:45.551 | 99.99th=[49021] 00:15:45.551 bw ( KiB/s): min=12288, max=12992, per=19.04%, avg=12640.00, stdev=497.80, samples=2 00:15:45.551 iops : min= 3072, max= 3248, avg=3160.00, stdev=124.45, samples=2 00:15:45.551 lat (msec) : 4=0.39%, 10=5.80%, 20=56.38%, 50=35.45%, 100=1.98% 00:15:45.551 cpu : usr=4.24%, sys=6.32%, ctx=306, majf=0, minf=15 00:15:45.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:45.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.551 issued rwts: total=3072,3287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.551 00:15:45.551 Run status group 0 (all jobs): 00:15:45.551 READ: bw=60.3MiB/s (63.2MB/s), 11.0MiB/s-20.0MiB/s (11.6MB/s-20.9MB/s), io=61.1MiB (64.1MB), run=1002-1014msec 00:15:45.551 WRITE: bw=64.8MiB/s (68.0MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.8MB/s), io=65.7MiB (68.9MB), run=1002-1014msec 00:15:45.551 00:15:45.551 Disk stats (read/write): 00:15:45.551 nvme0n1: ios=4394/4608, merge=0/0, ticks=51744/50897, in_queue=102641, util=98.60% 00:15:45.551 nvme0n2: ios=2580/2650, merge=0/0, ticks=26822/20167, in_queue=46989, util=97.56% 00:15:45.551 nvme0n3: ios=3725/4096, merge=0/0, ticks=48990/49731, in_queue=98721, util=97.81% 00:15:45.551 nvme0n4: ios=2583/2719, merge=0/0, ticks=32800/33735, in_queue=66535, util=98.52% 00:15:45.551 15:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:45.551 [global] 00:15:45.551 thread=1 00:15:45.551 invalidate=1 00:15:45.551 rw=randwrite 00:15:45.551 time_based=1 00:15:45.551 runtime=1 00:15:45.551 ioengine=libaio 00:15:45.551 direct=1 00:15:45.551 bs=4096 00:15:45.551 iodepth=128 00:15:45.551 norandommap=0 00:15:45.551 numjobs=1 00:15:45.551 00:15:45.551 verify_dump=1 00:15:45.551 verify_backlog=512 00:15:45.551 verify_state_save=0 00:15:45.551 do_verify=1 00:15:45.551 verify=crc32c-intel 00:15:45.551 [job0] 00:15:45.551 filename=/dev/nvme0n1 00:15:45.551 [job1] 00:15:45.551 filename=/dev/nvme0n2 00:15:45.551 [job2] 00:15:45.551 filename=/dev/nvme0n3 00:15:45.551 [job3] 00:15:45.551 filename=/dev/nvme0n4 00:15:45.551 Could not set queue depth (nvme0n1) 00:15:45.551 Could not set queue depth 
(nvme0n2) 00:15:45.551 Could not set queue depth (nvme0n3) 00:15:45.551 Could not set queue depth (nvme0n4) 00:15:45.551 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.551 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.551 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.551 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.551 fio-3.35 00:15:45.551 Starting 4 threads 00:15:46.948 00:15:46.948 job0: (groupid=0, jobs=1): err= 0: pid=1142601: Mon Jul 15 15:57:13 2024 00:15:46.948 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:46.948 slat (usec): min=3, max=48267, avg=99.00, stdev=868.98 00:15:46.948 clat (usec): min=6364, max=62884, avg=12953.41, stdev=6083.08 00:15:46.948 lat (usec): min=7291, max=62924, avg=13052.42, stdev=6133.83 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10814], 00:15:46.948 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:15:46.948 | 70.00th=[12518], 80.00th=[13173], 90.00th=[15008], 95.00th=[16057], 00:15:46.948 | 99.00th=[55837], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:15:46.948 | 99.99th=[62653] 00:15:46.948 write: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1002msec); 0 zone resets 00:15:46.948 slat (usec): min=5, max=26208, avg=101.36, stdev=670.94 00:15:46.948 clat (usec): min=246, max=55742, avg=13864.95, stdev=8794.59 00:15:46.948 lat (usec): min=5616, max=55749, avg=13966.30, stdev=8820.87 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 6390], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10421], 00:15:46.948 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:15:46.948 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15401], 95.00th=[35390], 00:15:46.948 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:15:46.948 | 99.99th=[55837] 00:15:46.948 bw ( KiB/s): min=16384, max=21256, per=29.30%, avg=18820.00, stdev=3445.02, samples=2 00:15:46.948 iops : min= 4096, max= 5314, avg=4705.00, stdev=861.26, samples=2 00:15:46.948 lat (usec) : 250=0.01% 00:15:46.948 lat (msec) : 10=9.61%, 20=84.46%, 50=3.85%, 100=2.08% 00:15:46.948 cpu : usr=7.29%, sys=10.29%, ctx=436, majf=0, minf=1 00:15:46.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:46.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.948 issued rwts: total=4608,4832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.948 job1: (groupid=0, jobs=1): err= 0: pid=1142608: Mon Jul 15 15:57:13 2024 00:15:46.948 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:15:46.948 slat (usec): min=2, max=16144, avg=227.72, stdev=1222.70 00:15:46.948 clat (usec): min=8258, max=61029, avg=28303.21, stdev=12265.48 00:15:46.948 lat (usec): min=8272, max=61038, avg=28530.93, stdev=12314.43 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[11600], 20.00th=[13829], 00:15:46.948 | 30.00th=[20579], 40.00th=[24511], 50.00th=[27919], 60.00th=[31851], 00:15:46.948 | 70.00th=[36963], 80.00th=[39060], 90.00th=[43254], 
95.00th=[49021], 00:15:46.948 | 99.00th=[53740], 99.50th=[59507], 99.90th=[61080], 99.95th=[61080], 00:15:46.948 | 99.99th=[61080] 00:15:46.948 write: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1004msec); 0 zone resets 00:15:46.948 slat (usec): min=3, max=15824, avg=153.99, stdev=870.23 00:15:46.948 clat (usec): min=1294, max=48201, avg=20621.84, stdev=9928.27 00:15:46.948 lat (usec): min=4976, max=48211, avg=20775.83, stdev=9957.48 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 5145], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[10814], 00:15:46.948 | 30.00th=[12387], 40.00th=[18220], 50.00th=[18744], 60.00th=[19792], 00:15:46.948 | 70.00th=[23725], 80.00th=[30540], 90.00th=[37487], 95.00th=[39584], 00:15:46.948 | 99.00th=[43254], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:15:46.948 | 99.99th=[47973] 00:15:46.948 bw ( KiB/s): min= 9712, max=10768, per=15.94%, avg=10240.00, stdev=746.70, samples=2 00:15:46.948 iops : min= 2428, max= 2692, avg=2560.00, stdev=186.68, samples=2 00:15:46.948 lat (msec) : 2=0.02%, 10=5.20%, 20=39.86%, 50=52.69%, 100=2.23% 00:15:46.948 cpu : usr=2.49%, sys=3.09%, ctx=306, majf=0, minf=1 00:15:46.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:46.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.948 issued rwts: total=2560,2593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.948 job2: (groupid=0, jobs=1): err= 0: pid=1142609: Mon Jul 15 15:57:13 2024 00:15:46.948 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:15:46.948 slat (usec): min=3, max=18223, avg=127.24, stdev=906.36 00:15:46.948 clat (usec): min=2259, max=41471, avg=16514.33, stdev=5346.55 00:15:46.948 lat (usec): min=2294, max=41512, avg=16641.57, stdev=5407.74 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11338], 20.00th=[11731], 00:15:46.948 | 30.00th=[12256], 40.00th=[14353], 50.00th=[15533], 60.00th=[17433], 00:15:46.948 | 70.00th=[19006], 80.00th=[19792], 90.00th=[23725], 95.00th=[25822], 00:15:46.948 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[35914], 00:15:46.948 | 99.99th=[41681] 00:15:46.948 write: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec); 0 zone resets 00:15:46.948 slat (usec): min=4, max=13603, avg=93.51, stdev=579.95 00:15:46.948 clat (usec): min=276, max=34187, avg=12972.63, stdev=3935.90 00:15:46.948 lat (usec): min=3280, max=34206, avg=13066.14, stdev=3959.99 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[11076], 00:15:46.948 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:15:46.948 | 70.00th=[13042], 80.00th=[15533], 90.00th=[18220], 95.00th=[22152], 00:15:46.948 | 99.00th=[23462], 99.50th=[25035], 99.90th=[26608], 99.95th=[30016], 00:15:46.948 | 99.99th=[34341] 00:15:46.948 bw ( KiB/s): min=16384, max=16384, per=25.51%, avg=16384.00, stdev= 0.00, samples=1 00:15:46.948 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:46.948 lat (usec) : 500=0.01% 00:15:46.948 lat (msec) : 4=0.25%, 10=9.42%, 20=77.80%, 50=12.52% 00:15:46.948 cpu : usr=7.40%, sys=9.60%, ctx=364, majf=0, minf=1 00:15:46.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:46.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:46.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.948 issued rwts: total=4096,4549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.948 job3: (groupid=0, jobs=1): err= 0: pid=1142611: Mon Jul 15 15:57:13 2024 00:15:46.948 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:15:46.948 slat (usec): min=2, max=15109, avg=120.08, stdev=851.30 00:15:46.948 clat (usec): min=3628, max=36036, avg=17312.14, stdev=6216.84 00:15:46.948 lat (usec): min=3642, max=37150, avg=17432.23, stdev=6283.28 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 5342], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11338], 00:15:46.948 | 30.00th=[12911], 40.00th=[14353], 50.00th=[17433], 60.00th=[18482], 00:15:46.948 | 70.00th=[20579], 80.00th=[23462], 90.00th=[26346], 95.00th=[27919], 00:15:46.948 | 99.00th=[30278], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:15:46.948 | 99.99th=[35914] 00:15:46.948 write: IOPS=4141, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1005msec); 0 zone resets 00:15:46.948 slat (usec): min=3, max=22004, avg=95.57, stdev=781.06 00:15:46.948 clat (usec): min=397, max=48548, avg=13610.95, stdev=7215.04 00:15:46.948 lat (usec): min=417, max=48554, avg=13706.51, stdev=7254.33 00:15:46.948 clat percentiles (usec): 00:15:46.948 | 1.00th=[ 1500], 5.00th=[ 4178], 10.00th=[ 5407], 20.00th=[ 7308], 00:15:46.948 | 30.00th=[10028], 40.00th=[11469], 50.00th=[14091], 60.00th=[15139], 00:15:46.948 | 70.00th=[16581], 80.00th=[17433], 90.00th=[21103], 95.00th=[23200], 00:15:46.948 | 99.00th=[38536], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:15:46.948 | 99.99th=[48497] 00:15:46.948 bw ( KiB/s): min=12288, max=20480, per=25.51%, avg=16384.00, stdev=5792.62, samples=2 00:15:46.948 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:46.948 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.06% 00:15:46.948 lat (msec) : 2=0.61%, 4=1.91%, 10=17.11%, 20=57.17%, 50=23.07% 00:15:46.948 cpu : usr=4.68%, sys=7.77%, ctx=287, majf=0, minf=1 00:15:46.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:46.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.948 issued rwts: total=4096,4162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.948 00:15:46.948 Run status group 0 (all jobs): 00:15:46.948 READ: bw=59.7MiB/s (62.6MB/s), 9.96MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=60.0MiB (62.9MB), run=1001-1005msec 00:15:46.948 WRITE: bw=62.7MiB/s (65.8MB/s), 10.1MiB/s-18.8MiB/s (10.6MB/s-19.8MB/s), io=63.0MiB (66.1MB), run=1001-1005msec 00:15:46.948 00:15:46.948 Disk stats (read/write): 00:15:46.948 nvme0n1: ios=3688/4096, merge=0/0, ticks=24655/25441, in_queue=50096, util=99.30% 00:15:46.948 nvme0n2: ios=2069/2463, merge=0/0, ticks=15971/11408, in_queue=27379, util=85.87% 00:15:46.948 nvme0n3: ios=3455/3584, merge=0/0, ticks=43693/29441, in_queue=73134, util=97.80% 00:15:46.948 nvme0n4: ios=3591/3584, merge=0/0, ticks=45397/33491, in_queue=78888, util=97.47% 00:15:46.948 15:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:46.948 15:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1142745 00:15:46.948 15:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t read -r 10 00:15:46.948 15:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:46.948 [global] 00:15:46.948 thread=1 00:15:46.948 invalidate=1 00:15:46.948 rw=read 00:15:46.948 time_based=1 00:15:46.948 runtime=10 00:15:46.948 ioengine=libaio 00:15:46.948 direct=1 00:15:46.948 bs=4096 00:15:46.948 iodepth=1 00:15:46.948 norandommap=1 00:15:46.948 numjobs=1 00:15:46.948 00:15:46.948 [job0] 00:15:46.948 filename=/dev/nvme0n1 00:15:46.948 [job1] 00:15:46.948 filename=/dev/nvme0n2 00:15:46.948 [job2] 00:15:46.948 filename=/dev/nvme0n3 00:15:46.948 [job3] 00:15:46.948 filename=/dev/nvme0n4 00:15:46.948 Could not set queue depth (nvme0n1) 00:15:46.948 Could not set queue depth (nvme0n2) 00:15:46.948 Could not set queue depth (nvme0n3) 00:15:46.948 Could not set queue depth (nvme0n4) 00:15:46.948 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.948 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.948 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.949 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.949 fio-3.35 00:15:46.949 Starting 4 threads 00:15:50.234 15:57:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:50.234 15:57:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:50.234 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=29978624, buflen=4096 00:15:50.234 fio: pid=1142840, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.234 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.234 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:50.234 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=4657152, buflen=4096 00:15:50.234 fio: pid=1142839, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.492 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2351104, buflen=4096 00:15:50.492 fio: pid=1142837, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.492 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.492 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:50.750 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.750 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:50.750 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=372736, buflen=4096 00:15:50.750 fio: pid=1142838, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:50.750 00:15:50.750 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1142837: Mon Jul 15 15:57:17 2024 
00:15:50.750 read: IOPS=167, BW=669KiB/s (685kB/s)(2296KiB/3432msec) 00:15:50.750 slat (usec): min=5, max=15828, avg=40.64, stdev=659.59 00:15:50.750 clat (usec): min=371, max=41921, avg=5893.55, stdev=13828.64 00:15:50.750 lat (usec): min=383, max=41937, avg=5934.24, stdev=13837.42 00:15:50.750 clat percentiles (usec): 00:15:50.750 | 1.00th=[ 375], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:15:50.750 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 457], 00:15:50.750 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[41157], 95.00th=[41157], 00:15:50.750 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:50.750 | 99.99th=[41681] 00:15:50.750 bw ( KiB/s): min= 96, max= 104, per=1.00%, avg=98.67, stdev= 4.13, samples=6 00:15:50.750 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:15:50.750 lat (usec) : 500=81.57%, 750=4.70% 00:15:50.750 lat (msec) : 10=0.17%, 50=13.39% 00:15:50.750 cpu : usr=0.06%, sys=0.26%, ctx=577, majf=0, minf=1 00:15:50.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.750 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1142838: Mon Jul 15 15:57:17 2024 00:15:50.750 read: IOPS=24, BW=98.1KiB/s (100kB/s)(364KiB/3710msec) 00:15:50.750 slat (usec): min=11, max=10843, avg=222.31, stdev=1365.36 00:15:50.750 clat (usec): min=542, max=41149, avg=40530.17, stdev=4238.95 00:15:50.750 lat (usec): min=565, max=51992, avg=40672.29, stdev=4404.93 00:15:50.750 clat percentiles (usec): 00:15:50.750 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:50.750 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:50.750 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:50.750 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:50.750 | 99.99th=[41157] 00:15:50.750 bw ( KiB/s): min= 93, max= 104, per=0.99%, avg=97.86, stdev= 4.34, samples=7 00:15:50.750 iops : min= 23, max= 26, avg=24.43, stdev= 1.13, samples=7 00:15:50.750 lat (usec) : 750=1.09% 00:15:50.750 lat (msec) : 50=97.83% 00:15:50.750 cpu : usr=0.00%, sys=0.30%, ctx=94, majf=0, minf=1 00:15:50.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.750 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1142839: Mon Jul 15 15:57:17 2024 00:15:50.750 read: IOPS=358, BW=1435KiB/s (1469kB/s)(4548KiB/3170msec) 00:15:50.750 slat (nsec): min=5608, max=69373, avg=23520.50, stdev=11911.98 00:15:50.750 clat (usec): min=317, max=41514, avg=2737.00, stdev=9419.10 00:15:50.750 lat (usec): min=334, max=41548, avg=2760.53, stdev=9419.63 00:15:50.750 clat percentiles (usec): 00:15:50.750 | 1.00th=[ 326], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 379], 00:15:50.750 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 416], 60.00th=[ 
429], 00:15:50.750 | 70.00th=[ 445], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[41157], 00:15:50.750 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:15:50.750 | 99.99th=[41681] 00:15:50.750 bw ( KiB/s): min= 96, max= 6840, per=15.35%, avg=1510.67, stdev=2701.39, samples=6 00:15:50.750 iops : min= 24, max= 1710, avg=377.67, stdev=675.35, samples=6 00:15:50.750 lat (usec) : 500=86.73%, 750=7.47% 00:15:50.750 lat (msec) : 50=5.71% 00:15:50.750 cpu : usr=0.28%, sys=1.04%, ctx=1141, majf=0, minf=1 00:15:50.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.750 issued rwts: total=1138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.751 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1142840: Mon Jul 15 15:57:17 2024 00:15:50.751 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(28.6MiB/2905msec) 00:15:50.751 slat (nsec): min=5296, max=61735, avg=12744.46, stdev=5788.42 00:15:50.751 clat (usec): min=277, max=41326, avg=377.87, stdev=480.87 00:15:50.751 lat (usec): min=284, max=41342, avg=390.61, stdev=481.13 00:15:50.751 clat percentiles (usec): 00:15:50.751 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 343], 20.00th=[ 351], 00:15:50.751 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:15:50.751 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 408], 00:15:50.751 | 99.00th=[ 515], 99.50th=[ 578], 99.90th=[ 955], 99.95th=[ 1037], 00:15:50.751 | 99.99th=[41157] 00:15:50.751 bw ( KiB/s): min= 9208, max=10752, per=100.00%, avg=10102.40, stdev=607.04, samples=5 00:15:50.751 iops : min= 2302, max= 2688, avg=2525.60, stdev=151.76, samples=5 00:15:50.751 lat (usec) : 500=98.61%, 750=1.12%, 1000=0.18% 00:15:50.751 lat (msec) : 2=0.07%, 50=0.01% 00:15:50.751 cpu : usr=1.72%, sys=5.34%, ctx=7321, majf=0, minf=1 00:15:50.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.751 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.751 issued rwts: total=7320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.751 00:15:50.751 Run status group 0 (all jobs): 00:15:50.751 READ: bw=9834KiB/s (10.1MB/s), 98.1KiB/s-9.84MiB/s (100kB/s-10.3MB/s), io=35.6MiB (37.4MB), run=2905-3710msec 00:15:50.751 00:15:50.751 Disk stats (read/write): 00:15:50.751 nvme0n1: ios=448/0, merge=0/0, ticks=3330/0, in_queue=3330, util=95.51% 00:15:50.751 nvme0n2: ios=88/0, merge=0/0, ticks=3568/0, in_queue=3568, util=96.28% 00:15:50.751 nvme0n3: ios=1189/0, merge=0/0, ticks=3463/0, in_queue=3463, util=100.00% 00:15:50.751 nvme0n4: ios=7227/0, merge=0/0, ticks=2635/0, in_queue=2635, util=96.75% 00:15:51.008 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.008 15:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:51.265 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.265 15:57:18 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:51.521 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.521 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:51.778 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.778 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:52.037 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:52.037 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1142745 00:15:52.037 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:52.037 15:57:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:52.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:52.295 nvmf hotplug test: fio failed as expected 00:15:52.295 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.554 rmmod nvme_tcp 00:15:52.554 rmmod nvme_fabrics 00:15:52.554 rmmod nvme_keyring 00:15:52.554 
15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:52.554 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1140127 ']' 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1140127 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1140127 ']' 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1140127 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140127 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140127' 00:15:52.555 killing process with pid 1140127 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1140127 00:15:52.555 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1140127 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.812 15:57:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.342 15:57:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:55.342 00:15:55.342 real 0m23.594s 00:15:55.342 user 1m23.069s 00:15:55.342 sys 0m6.137s 00:15:55.342 15:57:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.342 15:57:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 ************************************ 00:15:55.342 END TEST nvmf_fio_target 00:15:55.342 ************************************ 00:15:55.342 15:57:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:55.342 15:57:21 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:55.342 15:57:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:55.342 15:57:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.343 15:57:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 ************************************ 00:15:55.343 START TEST nvmf_bdevio 00:15:55.343 ************************************ 00:15:55.343 15:57:21 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:55.343 * Looking for test storage... 00:15:55.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:55.343 15:57:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:57.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:57.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:57.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:57.242 
Found net devices under 0000:0a:00.1: cvl_0_1 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:15:57.242 00:15:57.242 --- 10.0.0.2 ping statistics --- 00:15:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.242 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:57.242 00:15:57.242 --- 10.0.0.1 ping statistics --- 00:15:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.242 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1145456 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1145456 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1145456 ']' 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.242 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.243 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.243 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.243 15:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.243 [2024-07-15 15:57:23.966513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:57.243 [2024-07-15 15:57:23.966607] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.243 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.243 [2024-07-15 15:57:24.043024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.243 [2024-07-15 15:57:24.165837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.243 [2024-07-15 15:57:24.165908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:57.243 [2024-07-15 15:57:24.165924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.243 [2024-07-15 15:57:24.165935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.243 [2024-07-15 15:57:24.165958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.243 [2024-07-15 15:57:24.166031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:57.243 [2024-07-15 15:57:24.166092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:57.243 [2024-07-15 15:57:24.166158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:57.243 [2024-07-15 15:57:24.166161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.501 [2024-07-15 15:57:24.335765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.501 Malloc0 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
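Taken together, the rpc_cmd traces above are the standard five-step bring-up of a TCP target for this test: create the transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and open the listener. Condensed into plain rpc.py calls (a sketch reconstructed from the trace, not a verbatim script; rpc_cmd is effectively the harness's wrapper around scripts/rpc.py, and the workspace path is shortened to $SPDK here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420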
00:15:57.501 [2024-07-15 15:57:24.386852] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:57.501 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:57.501 { 00:15:57.501 "params": { 00:15:57.501 "name": "Nvme$subsystem", 00:15:57.501 "trtype": "$TEST_TRANSPORT", 00:15:57.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.501 "adrfam": "ipv4", 00:15:57.501 "trsvcid": "$NVMF_PORT", 00:15:57.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.501 "hdgst": ${hdgst:-false}, 00:15:57.502 "ddgst": ${ddgst:-false} 00:15:57.502 }, 00:15:57.502 "method": "bdev_nvme_attach_controller" 00:15:57.502 } 00:15:57.502 EOF 00:15:57.502 )") 00:15:57.502 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:57.502 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:57.502 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:57.502 15:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:57.502 "params": { 00:15:57.502 "name": "Nvme1", 00:15:57.502 "trtype": "tcp", 00:15:57.502 "traddr": "10.0.0.2", 00:15:57.502 "adrfam": "ipv4", 00:15:57.502 "trsvcid": "4420", 00:15:57.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.502 "hdgst": false, 00:15:57.502 "ddgst": false 00:15:57.502 }, 00:15:57.502 "method": "bdev_nvme_attach_controller" 00:15:57.502 }' 00:15:57.759 [2024-07-15 15:57:24.433444] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:15:57.759 [2024-07-15 15:57:24.433546] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145484 ] 00:15:57.759 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.759 [2024-07-15 15:57:24.497986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.759 [2024-07-15 15:57:24.611102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.759 [2024-07-15 15:57:24.611151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.759 [2024-07-15 15:57:24.611155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.325 I/O targets: 00:15:58.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:58.325 00:15:58.325 00:15:58.325 CUnit - A unit testing framework for C - Version 2.1-3 00:15:58.326 http://cunit.sourceforge.net/ 00:15:58.326 00:15:58.326 00:15:58.326 Suite: bdevio tests on: Nvme1n1 00:15:58.326 Test: blockdev write read block ...passed 00:15:58.326 Test: blockdev write zeroes read block ...passed 00:15:58.326 Test: blockdev write zeroes read no split ...passed 00:15:58.326 Test: blockdev write zeroes read split ...passed 00:15:58.326 Test: blockdev write zeroes read split partial ...passed 00:15:58.326 Test: blockdev reset ...[2024-07-15 15:57:25.120319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:58.326 [2024-07-15 15:57:25.120432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d5580 (9): Bad file descriptor 00:15:58.326 [2024-07-15 15:57:25.179381] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:58.326 passed 00:15:58.326 Test: blockdev write read 8 blocks ...passed 00:15:58.326 Test: blockdev write read size > 128k ...passed 00:15:58.326 Test: blockdev write read invalid size ...passed 00:15:58.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.326 Test: blockdev write read max offset ...passed 00:15:58.583 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.583 Test: blockdev writev readv 8 blocks ...passed 00:15:58.583 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.583 Test: blockdev writev readv block ...passed 00:15:58.583 Test: blockdev writev readv size > 128k ...passed 00:15:58.583 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.583 Test: blockdev comparev and writev ...[2024-07-15 15:57:25.439123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.439159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.439183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.439200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.439628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.439644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.440014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.440038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.440060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.440076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.440457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.440482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:58.583 [2024-07-15 15:57:25.440503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.583 [2024-07-15 15:57:25.440521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:58.583 passed 00:15:58.842 Test: blockdev nvme passthru rw ...passed 00:15:58.842 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:57:25.524226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.842 [2024-07-15 15:57:25.524254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:58.842 [2024-07-15 15:57:25.524441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.842 [2024-07-15 15:57:25.524472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:58.842 [2024-07-15 15:57:25.524658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.842 [2024-07-15 15:57:25.524681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:58.842 [2024-07-15 15:57:25.524861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.842 [2024-07-15 15:57:25.524892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:58.842 passed 00:15:58.842 Test: blockdev nvme admin passthru ...passed 00:15:58.842 Test: blockdev copy ...passed 00:15:58.842 00:15:58.842 Run Summary: Type Total Ran Passed Failed Inactive 00:15:58.842 suites 1 1 n/a 0 0 00:15:58.842 tests 23 23 23 0 0 00:15:58.842 asserts 152 152 152 0 n/a 00:15:58.842 00:15:58.842 Elapsed time = 1.265 seconds 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.100 rmmod nvme_tcp 00:15:59.100 rmmod nvme_fabrics 00:15:59.100 rmmod nvme_keyring 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1145456 ']' 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1145456 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1145456 ']' 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1145456 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145456 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145456' 00:15:59.100 killing process with pid 1145456 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1145456 00:15:59.100 15:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1145456 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.359 15:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.295 15:57:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.295 00:16:01.295 real 0m6.461s 00:16:01.295 user 0m11.305s 00:16:01.295 sys 0m2.010s 00:16:01.295 15:57:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.295 15:57:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:01.295 ************************************ 00:16:01.295 END TEST nvmf_bdevio 00:16:01.295 ************************************ 00:16:01.555 15:57:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.555 15:57:28 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:01.555 15:57:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.555 15:57:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.555 15:57:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.555 ************************************ 00:16:01.555 START TEST nvmf_auth_target 00:16:01.555 ************************************ 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:01.555 * Looking for test storage... 
00:16:01.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.555 15:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.457 15:57:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:03.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:03.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:16:03.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:03.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:03.457 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:03.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:16:03.717 00:16:03.717 --- 10.0.0.2 ping statistics --- 00:16:03.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.717 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:16:03.717 00:16:03.717 --- 10.0.0.1 ping statistics --- 00:16:03.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.717 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1147672 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1147672 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1147672 ']' 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
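
For a phy run like this one, nvmf_tcp_init loops the NIC's two ports back on one host: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); the two pings above confirm both directions before the target is launched inside the namespace. A condensed sketch of the wiring traced above:

# sketch of the namespace plumbing from nvmf_tcp_init above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the target then runs inside the namespace, as in the nvmfappstart trace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
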
00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.717 15:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1147823 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9daf0f4adb8dfc82b6fe02d04cd4230533facc4bfd691b89 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.z2e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9daf0f4adb8dfc82b6fe02d04cd4230533facc4bfd691b89 0 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9daf0f4adb8dfc82b6fe02d04cd4230533facc4bfd691b89 0 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9daf0f4adb8dfc82b6fe02d04cd4230533facc4bfd691b89 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.z2e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.z2e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.z2e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ada5517f6291c91dde87f7dad57bbe808c11af509dbf66970e30884e4af2060e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qRf 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ada5517f6291c91dde87f7dad57bbe808c11af509dbf66970e30884e4af2060e 3 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ada5517f6291c91dde87f7dad57bbe808c11af509dbf66970e30884e4af2060e 3 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ada5517f6291c91dde87f7dad57bbe808c11af509dbf66970e30884e4af2060e 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qRf 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qRf 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.qRf 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48bf35eec19233c6f4b43238d60e3b38 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SIv 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48bf35eec19233c6f4b43238d60e3b38 1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48bf35eec19233c6f4b43238d60e3b38 1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=48bf35eec19233c6f4b43238d60e3b38 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:04.649 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SIv 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SIv 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.SIv 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.906 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b9b52c0f70d385158558c9cca9af8b9fbc5e19b79c1711f 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KmP 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b9b52c0f70d385158558c9cca9af8b9fbc5e19b79c1711f 2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b9b52c0f70d385158558c9cca9af8b9fbc5e19b79c1711f 2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b9b52c0f70d385158558c9cca9af8b9fbc5e19b79c1711f 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KmP 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KmP 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.KmP 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01ef10d748ca5ea74996b4a1dda16543d7e57d0c4c672270 00:16:04.907 
15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X9A 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01ef10d748ca5ea74996b4a1dda16543d7e57d0c4c672270 2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01ef10d748ca5ea74996b4a1dda16543d7e57d0c4c672270 2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01ef10d748ca5ea74996b4a1dda16543d7e57d0c4c672270 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X9A 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X9A 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.X9A 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f6c48a530d0d36734dec0f64f1b9b502 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2ur 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f6c48a530d0d36734dec0f64f1b9b502 1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f6c48a530d0d36734dec0f64f1b9b502 1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f6c48a530d0d36734dec0f64f1b9b502 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2ur 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2ur 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.2ur 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c62d0d9e0b06261da8ce1c4d5bc2dfee5a94e2c192038853a5f3498037568c4c 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jD6 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c62d0d9e0b06261da8ce1c4d5bc2dfee5a94e2c192038853a5f3498037568c4c 3 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c62d0d9e0b06261da8ce1c4d5bc2dfee5a94e2c192038853a5f3498037568c4c 3 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c62d0d9e0b06261da8ce1c4d5bc2dfee5a94e2c192038853a5f3498037568c4c 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jD6 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jD6 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.jD6 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1147672 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1147672 ']' 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
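
Each gen_dhchap_key call above draws the requested number of hex characters from /dev/urandom (xxd -p), writes the secret to a 0600 temp file, and encodes it in the DHHC-1 representation: base64 of the ASCII hex key followed by its CRC-32, prefixed with the digest id (00 null, 01 sha256, 02 sha384, 03 sha512) -- the same format visible later in the nvme connect --dhchap-secret arguments. A minimal stand-alone sketch of that encoding, assuming the standard NVMe TP 8006 secret layout (not the harness's exact helper):

# sketch: build a DHHC-1 secret like /tmp/spdk.key-null.z2e above
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as for 'gen_dhchap_key null 48'
python3 -c 'import base64,struct,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:00:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"
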
00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.907 15:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1147823 /var/tmp/host.sock 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1147823 ']' 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:05.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.165 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.z2e 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.z2e 00:16:05.423 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.z2e 00:16:05.681 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.qRf ]] 00:16:05.681 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qRf 00:16:05.681 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.681 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qRf 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qRf 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SIv 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SIv 00:16:05.939 15:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.SIv 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.KmP ]] 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KmP 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KmP 00:16:06.197 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KmP 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.X9A 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.X9A 00:16:06.455 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.X9A 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.2ur ]] 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2ur 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2ur 00:16:06.713 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.2ur 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jD6 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jD6 00:16:06.971 15:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jD6 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.229 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.487 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.051 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- 
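Before each connection attempt the host is pinned to exactly one digest and one DH group, so a successful handshake proves that specific combination works rather than whatever the two sides would negotiate by default. The first iteration shown above reduces to:

    # sha256 digest, "null" DH group (no ephemeral Diffie-Hellman, HMAC challenge only)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
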
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.051 { 00:16:08.051 "cntlid": 1, 00:16:08.051 "qid": 0, 00:16:08.051 "state": "enabled", 00:16:08.051 "thread": "nvmf_tgt_poll_group_000", 00:16:08.051 "listen_address": { 00:16:08.051 "trtype": "TCP", 00:16:08.051 "adrfam": "IPv4", 00:16:08.051 "traddr": "10.0.0.2", 00:16:08.051 "trsvcid": "4420" 00:16:08.051 }, 00:16:08.051 "peer_address": { 00:16:08.051 "trtype": "TCP", 00:16:08.051 "adrfam": "IPv4", 00:16:08.051 "traddr": "10.0.0.1", 00:16:08.051 "trsvcid": "60150" 00:16:08.051 }, 00:16:08.051 "auth": { 00:16:08.051 "state": "completed", 00:16:08.051 "digest": "sha256", 00:16:08.051 "dhgroup": "null" 00:16:08.051 } 00:16:08.051 } 00:16:08.051 ]' 00:16:08.051 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.308 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.308 15:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.308 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:08.308 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.308 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.308 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.308 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.566 15:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.499 15:57:36 
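The checks above confirm both ends agree that authentication actually ran: the controller name from bdev_nvme_get_controllers must be nvme0, and the target-side qpair listing must report a completed auth block with the pinned digest and group. As a sketch of the same assertions:

    # Host side: the attached controller should exist under the expected name.
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Target side: the admin qpair must have finished DH-HMAC-CHAP with the pinned parameters.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
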
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.499 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.756 15:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.757 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.757 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.015 00:16:10.015 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.015 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.015 15:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.273 { 00:16:10.273 "cntlid": 3, 00:16:10.273 "qid": 0, 00:16:10.273 
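Each round re-adds the host NQN to the subsystem's allowed-host list together with the key pair it is expected to authenticate with; the key1 round above boils down to:

    # Allow this host on cnode0 and bind it to key1/ckey1 for DH-HMAC-CHAP.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
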
"state": "enabled", 00:16:10.273 "thread": "nvmf_tgt_poll_group_000", 00:16:10.273 "listen_address": { 00:16:10.273 "trtype": "TCP", 00:16:10.273 "adrfam": "IPv4", 00:16:10.273 "traddr": "10.0.0.2", 00:16:10.273 "trsvcid": "4420" 00:16:10.273 }, 00:16:10.273 "peer_address": { 00:16:10.273 "trtype": "TCP", 00:16:10.273 "adrfam": "IPv4", 00:16:10.273 "traddr": "10.0.0.1", 00:16:10.273 "trsvcid": "60164" 00:16:10.273 }, 00:16:10.273 "auth": { 00:16:10.273 "state": "completed", 00:16:10.273 "digest": "sha256", 00:16:10.273 "dhgroup": "null" 00:16:10.273 } 00:16:10.273 } 00:16:10.273 ]' 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:10.273 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.531 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.531 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.531 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.788 15:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.720 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.978 15:57:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.978 15:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.248 00:16:12.248 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.248 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.248 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.505 { 00:16:12.505 "cntlid": 5, 00:16:12.505 "qid": 0, 00:16:12.505 "state": "enabled", 00:16:12.505 "thread": "nvmf_tgt_poll_group_000", 00:16:12.505 "listen_address": { 00:16:12.505 "trtype": "TCP", 00:16:12.505 "adrfam": "IPv4", 00:16:12.505 "traddr": "10.0.0.2", 00:16:12.505 "trsvcid": "4420" 00:16:12.505 }, 00:16:12.505 "peer_address": { 00:16:12.505 "trtype": "TCP", 00:16:12.505 "adrfam": "IPv4", 00:16:12.505 "traddr": "10.0.0.1", 00:16:12.505 "trsvcid": "60200" 00:16:12.505 }, 00:16:12.505 "auth": { 00:16:12.505 "state": "completed", 00:16:12.505 "digest": "sha256", 00:16:12.505 "dhgroup": "null" 00:16:12.505 } 00:16:12.505 } 00:16:12.505 ]' 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.505 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.762 15:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.694 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.293 15:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
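For key3 the ckeys slot is empty, so the ${ckeys[$3]:+...} expansion above produces nothing and no --dhchap-ctrlr-key is passed: this round tests one-way authentication, where the host proves its identity but does not challenge the controller back. The expansion pattern, isolated with the index hard-coded for illustration:

    # With ckeys[3]="" the :+ alternative collapses to an empty array,
    # so only --dhchap-key reaches the RPC (unidirectional auth).
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key3 "${ckey[@]}"
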
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.551 00:16:14.551 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.551 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.551 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.808 { 00:16:14.808 "cntlid": 7, 00:16:14.808 "qid": 0, 00:16:14.808 "state": "enabled", 00:16:14.808 "thread": "nvmf_tgt_poll_group_000", 00:16:14.808 "listen_address": { 00:16:14.808 "trtype": "TCP", 00:16:14.808 "adrfam": "IPv4", 00:16:14.808 "traddr": "10.0.0.2", 00:16:14.808 "trsvcid": "4420" 00:16:14.808 }, 00:16:14.808 "peer_address": { 00:16:14.808 "trtype": "TCP", 00:16:14.808 "adrfam": "IPv4", 00:16:14.808 "traddr": "10.0.0.1", 00:16:14.808 "trsvcid": "35874" 00:16:14.808 }, 00:16:14.808 "auth": { 00:16:14.808 "state": "completed", 00:16:14.808 "digest": "sha256", 00:16:14.808 "dhgroup": "null" 00:16:14.808 } 00:16:14.808 } 00:16:14.808 ]' 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.808 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.065 15:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
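The SPDK-host leg of each round is bdev_nvme_attach_controller against the listener on 10.0.0.2:4420; either authentication succeeds and nvme0 appears, or the attach fails outright. For the one-way key3 round just completed this is:

    # Attach as nvme0; no --dhchap-ctrlr-key here, matching the one-way round.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
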
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.996 15:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.254 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.821 00:16:16.821 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.821 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.821 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.077 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.077 { 00:16:17.077 "cntlid": 9, 00:16:17.077 "qid": 0, 00:16:17.077 "state": "enabled", 00:16:17.077 "thread": "nvmf_tgt_poll_group_000", 00:16:17.077 "listen_address": { 00:16:17.077 "trtype": "TCP", 00:16:17.077 "adrfam": "IPv4", 00:16:17.077 "traddr": "10.0.0.2", 00:16:17.077 "trsvcid": "4420" 00:16:17.077 }, 00:16:17.077 "peer_address": { 00:16:17.077 "trtype": "TCP", 00:16:17.077 "adrfam": "IPv4", 00:16:17.077 "traddr": "10.0.0.1", 00:16:17.077 "trsvcid": "35898" 00:16:17.077 }, 00:16:17.077 "auth": { 00:16:17.078 "state": "completed", 00:16:17.078 "digest": "sha256", 00:16:17.078 "dhgroup": "ffdhe2048" 00:16:17.078 } 00:16:17.078 } 00:16:17.078 ]' 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.078 15:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.335 15:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.267 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
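By this point the pattern is visible in the cntlid values, which climb by two per round (1, 3, 5, ...) as every iteration creates a fresh admin controller. The whole sweep is three nested loops with the per-round body factored into connect_authenticate; a sketch whose names follow the auth.sh fragments quoted in this log (array contents beyond what the log shows are assumed):

    for digest in "${digests[@]}"; do          # auth.sh@91
        for dhgroup in "${dhgroups[@]}"; do    # auth.sh@92: null, ffdhe2048, ...
            for keyid in "${!keys[@]}"; do     # auth.sh@93: keys 0..3
                scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
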
--dhchap-dhgroups ffdhe2048 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.525 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.784 00:16:18.784 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.784 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.784 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.042 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.042 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.042 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.042 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.299 15:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.299 15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.299 { 00:16:19.299 "cntlid": 11, 00:16:19.299 "qid": 0, 00:16:19.299 "state": "enabled", 00:16:19.299 "thread": "nvmf_tgt_poll_group_000", 00:16:19.299 "listen_address": { 00:16:19.299 "trtype": "TCP", 00:16:19.299 "adrfam": "IPv4", 00:16:19.299 "traddr": "10.0.0.2", 00:16:19.299 "trsvcid": "4420" 00:16:19.299 }, 00:16:19.299 "peer_address": { 00:16:19.299 "trtype": "TCP", 00:16:19.299 "adrfam": "IPv4", 00:16:19.299 "traddr": "10.0.0.1", 00:16:19.299 "trsvcid": "35930" 00:16:19.299 }, 00:16:19.299 "auth": { 00:16:19.299 "state": "completed", 00:16:19.299 "digest": "sha256", 00:16:19.299 "dhgroup": "ffdhe2048" 00:16:19.299 } 00:16:19.299 } 00:16:19.299 ]' 00:16:19.299 
15:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.299 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.557 15:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:16:20.488 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.488 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.489 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- 
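Teardown between rounds is two-stage, as the log shows: the SPDK-host controller is detached once its qpair checks pass, and only after the kernel-initiator connect/disconnect does the host entry leave the subsystem's allowed-host list:

    # 1) drop the SPDK-host controller created by attach
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # 2) after the nvme-cli leg, revoke the host's entry on the target
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
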
common/autotest_common.sh@10 -- # set +x 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.746 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.311 00:16:21.311 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.311 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.311 15:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.311 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.311 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.311 15:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.311 15:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.568 { 00:16:21.568 "cntlid": 13, 00:16:21.568 "qid": 0, 00:16:21.568 "state": "enabled", 00:16:21.568 "thread": "nvmf_tgt_poll_group_000", 00:16:21.568 "listen_address": { 00:16:21.568 "trtype": "TCP", 00:16:21.568 "adrfam": "IPv4", 00:16:21.568 "traddr": "10.0.0.2", 00:16:21.568 "trsvcid": "4420" 00:16:21.568 }, 00:16:21.568 "peer_address": { 00:16:21.568 "trtype": "TCP", 00:16:21.568 "adrfam": "IPv4", 00:16:21.568 "traddr": "10.0.0.1", 00:16:21.568 "trsvcid": "35946" 00:16:21.568 }, 00:16:21.568 "auth": { 00:16:21.568 "state": "completed", 00:16:21.568 "digest": "sha256", 00:16:21.568 "dhgroup": "ffdhe2048" 00:16:21.568 } 00:16:21.568 } 00:16:21.568 ]' 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.568 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.825 15:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
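The DHHC-1 secrets handed to nvme-cli throughout this log carry, in their second field, the hash the secret was generated for, and the values line up with the key file names above: 00 for the plain (null) key, 01 for sha256, 02 for sha384, 03 for sha512. A quick way to read that field (secret value elided here, not a real key):

    # Second colon-separated field names the secret's hash (00=none, 01=SHA-256,
    # 02=SHA-384, 03=SHA-512); compare with the /tmp/spdk.key-<hash> files above.
    secret='DHHC-1:03:<base64 elided>:'
    cut -d: -f2 <<< "$secret"    # prints 03
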
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:16:22.756 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.756 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.756 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.757 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.757 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.757 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.757 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.757 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.014 15:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.272 00:16:23.272 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.272 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.272 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.529 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.529 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.529 15:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.529 15:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.792 { 00:16:23.792 "cntlid": 15, 00:16:23.792 "qid": 0, 00:16:23.792 "state": "enabled", 00:16:23.792 "thread": "nvmf_tgt_poll_group_000", 00:16:23.792 "listen_address": { 00:16:23.792 "trtype": "TCP", 00:16:23.792 "adrfam": "IPv4", 00:16:23.792 "traddr": "10.0.0.2", 00:16:23.792 "trsvcid": "4420" 00:16:23.792 }, 00:16:23.792 "peer_address": { 00:16:23.792 "trtype": "TCP", 00:16:23.792 "adrfam": "IPv4", 00:16:23.792 "traddr": "10.0.0.1", 00:16:23.792 "trsvcid": "35978" 00:16:23.792 }, 00:16:23.792 "auth": { 00:16:23.792 "state": "completed", 00:16:23.792 "digest": "sha256", 00:16:23.792 "dhgroup": "ffdhe2048" 00:16:23.792 } 00:16:23.792 } 00:16:23.792 ]' 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.792 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.050 15:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target 
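The dhgroup loop now advances to ffdhe3072 and the same four key rounds repeat; only the negotiated group in the qpair's auth block should change. A spot-check against the target:

    # Expect the group pinned by the current loop iteration.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.dhgroup'    # -> ffdhe3072 in this section
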
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.984 15:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.242 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.808 00:16:25.808 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.808 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.808 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.066 { 00:16:26.066 "cntlid": 17, 00:16:26.066 "qid": 0, 00:16:26.066 "state": "enabled", 00:16:26.066 "thread": "nvmf_tgt_poll_group_000", 00:16:26.066 "listen_address": { 00:16:26.066 "trtype": "TCP", 00:16:26.066 "adrfam": "IPv4", 00:16:26.066 "traddr": 
"10.0.0.2", 00:16:26.066 "trsvcid": "4420" 00:16:26.066 }, 00:16:26.066 "peer_address": { 00:16:26.066 "trtype": "TCP", 00:16:26.066 "adrfam": "IPv4", 00:16:26.066 "traddr": "10.0.0.1", 00:16:26.066 "trsvcid": "41812" 00:16:26.066 }, 00:16:26.066 "auth": { 00:16:26.066 "state": "completed", 00:16:26.066 "digest": "sha256", 00:16:26.066 "dhgroup": "ffdhe3072" 00:16:26.066 } 00:16:26.066 } 00:16:26.066 ]' 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.066 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.324 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:16:27.255 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.256 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.514 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.775 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.070 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.329 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.329 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.329 { 00:16:28.329 "cntlid": 19, 00:16:28.329 "qid": 0, 00:16:28.329 "state": "enabled", 00:16:28.329 "thread": "nvmf_tgt_poll_group_000", 00:16:28.329 "listen_address": { 00:16:28.329 "trtype": "TCP", 00:16:28.329 "adrfam": "IPv4", 00:16:28.329 "traddr": "10.0.0.2", 00:16:28.329 "trsvcid": "4420" 00:16:28.329 }, 00:16:28.329 "peer_address": { 00:16:28.329 "trtype": "TCP", 00:16:28.329 "adrfam": "IPv4", 00:16:28.329 "traddr": "10.0.0.1", 00:16:28.329 "trsvcid": "41838" 00:16:28.329 }, 00:16:28.329 "auth": { 00:16:28.329 "state": "completed", 00:16:28.329 "digest": "sha256", 00:16:28.329 "dhgroup": "ffdhe3072" 00:16:28.329 } 00:16:28.329 } 00:16:28.329 ]' 00:16:28.329 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.329 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.586 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.519 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.777 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.035 00:16:30.035 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.035 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.035 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.292 { 00:16:30.292 "cntlid": 21, 00:16:30.292 "qid": 0, 00:16:30.292 "state": "enabled", 00:16:30.292 "thread": "nvmf_tgt_poll_group_000", 00:16:30.292 "listen_address": { 00:16:30.292 "trtype": "TCP", 00:16:30.292 "adrfam": "IPv4", 00:16:30.292 "traddr": "10.0.0.2", 00:16:30.292 "trsvcid": "4420" 00:16:30.292 }, 00:16:30.292 "peer_address": { 00:16:30.292 "trtype": "TCP", 00:16:30.292 "adrfam": "IPv4", 00:16:30.292 "traddr": "10.0.0.1", 00:16:30.292 "trsvcid": "41876" 00:16:30.292 }, 00:16:30.292 "auth": { 00:16:30.292 "state": "completed", 00:16:30.292 "digest": "sha256", 00:16:30.292 "dhgroup": "ffdhe3072" 00:16:30.292 } 00:16:30.292 } 00:16:30.292 ]' 00:16:30.292 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.550 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.808 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
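The cycle traced above, and repeated below for each remaining key/dhgroup combination, is the core of the DH-HMAC-CHAP test: the target allows a host NQN with a given key (and, when the test defines one, a bidirectional controller key), the host-side bdev_nvme layer attaches a controller to force the handshake, and the resulting qpair is checked for the negotiated digest, dhgroup, and a "completed" auth state before everything is torn down and re-verified from the kernel initiator. A minimal sketch of one such cycle, reconstructed from the logged commands, follows; the default target RPC socket, the placeholder DHHC-1 secret values, and the key names key1/ckey1 (registered earlier in the run) are assumptions.

#!/usr/bin/env bash
# Sketch of one connect/authenticate cycle as exercised in the trace above.
# Assumptions: the target uses the default RPC socket; key1/ckey1 name keys
# registered earlier in the run; the DHHC-1 secrets are placeholders for the
# base64 values elided here.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK instance
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Pin the host-side initiator to a single digest/dhgroup pair for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target: allow the host with a key and, when one exists, a controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host: attaching a controller forces the DH-HMAC-CHAP handshake.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists and the qpair reports what was negotiated.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Re-check the same keys from the kernel initiator (secret values elided).
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:01:<base64 key>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 ctrl key>:'
nvme disconnect -n "$subnqn"

# Tear down before the next key/dhgroup combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"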
00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.742 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.999 15:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.257 00:16:32.257 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.257 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.257 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.514 { 00:16:32.514 "cntlid": 23, 00:16:32.514 "qid": 0, 00:16:32.514 "state": "enabled", 00:16:32.514 "thread": "nvmf_tgt_poll_group_000", 00:16:32.514 "listen_address": { 00:16:32.514 "trtype": "TCP", 00:16:32.514 "adrfam": "IPv4", 00:16:32.514 "traddr": "10.0.0.2", 00:16:32.514 "trsvcid": "4420" 00:16:32.514 }, 00:16:32.514 "peer_address": { 00:16:32.514 "trtype": "TCP", 00:16:32.514 "adrfam": "IPv4", 00:16:32.514 "traddr": "10.0.0.1", 00:16:32.514 "trsvcid": "41900" 00:16:32.514 }, 00:16:32.514 "auth": { 00:16:32.514 "state": "completed", 00:16:32.514 "digest": "sha256", 00:16:32.514 "dhgroup": "ffdhe3072" 00:16:32.514 } 00:16:32.514 } 00:16:32.514 ]' 00:16:32.514 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.770 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.026 15:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.959 15:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.215 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.777 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.777 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.777 { 00:16:34.777 "cntlid": 25, 00:16:34.777 "qid": 0, 00:16:34.777 "state": "enabled", 00:16:34.777 "thread": "nvmf_tgt_poll_group_000", 00:16:34.777 "listen_address": { 00:16:34.777 "trtype": "TCP", 00:16:34.777 "adrfam": "IPv4", 00:16:34.777 "traddr": "10.0.0.2", 00:16:34.777 "trsvcid": "4420" 00:16:34.777 }, 00:16:34.777 "peer_address": { 00:16:34.777 "trtype": "TCP", 00:16:34.777 "adrfam": "IPv4", 00:16:34.778 "traddr": "10.0.0.1", 00:16:34.778 "trsvcid": "34434" 00:16:34.778 }, 00:16:34.778 "auth": { 00:16:34.778 "state": "completed", 00:16:34.778 "digest": "sha256", 00:16:34.778 "dhgroup": "ffdhe4096" 00:16:34.778 } 00:16:34.778 } 00:16:34.778 ]' 00:16:34.778 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.034 15:58:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.034 15:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.291 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.221 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.478 15:58:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.478 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.041 00:16:37.041 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.041 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.041 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.298 { 00:16:37.298 "cntlid": 27, 00:16:37.298 "qid": 0, 00:16:37.298 "state": "enabled", 00:16:37.298 "thread": "nvmf_tgt_poll_group_000", 00:16:37.298 "listen_address": { 00:16:37.298 "trtype": "TCP", 00:16:37.298 "adrfam": "IPv4", 00:16:37.298 "traddr": "10.0.0.2", 00:16:37.298 "trsvcid": "4420" 00:16:37.298 }, 00:16:37.298 "peer_address": { 00:16:37.298 "trtype": "TCP", 00:16:37.298 "adrfam": "IPv4", 00:16:37.298 "traddr": "10.0.0.1", 00:16:37.298 "trsvcid": "34466" 00:16:37.298 }, 00:16:37.298 "auth": { 00:16:37.298 "state": "completed", 00:16:37.298 "digest": "sha256", 00:16:37.298 "dhgroup": "ffdhe4096" 00:16:37.298 } 00:16:37.298 } 00:16:37.298 ]' 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.298 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.555 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:16:38.486 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.487 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.744 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.309 00:16:39.309 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.309 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.309 15:58:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.567 { 00:16:39.567 "cntlid": 29, 00:16:39.567 "qid": 0, 00:16:39.567 "state": "enabled", 00:16:39.567 "thread": "nvmf_tgt_poll_group_000", 00:16:39.567 "listen_address": { 00:16:39.567 "trtype": "TCP", 00:16:39.567 "adrfam": "IPv4", 00:16:39.567 "traddr": "10.0.0.2", 00:16:39.567 "trsvcid": "4420" 00:16:39.567 }, 00:16:39.567 "peer_address": { 00:16:39.567 "trtype": "TCP", 00:16:39.567 "adrfam": "IPv4", 00:16:39.567 "traddr": "10.0.0.1", 00:16:39.567 "trsvcid": "34498" 00:16:39.567 }, 00:16:39.567 "auth": { 00:16:39.567 "state": "completed", 00:16:39.567 "digest": "sha256", 00:16:39.567 "dhgroup": "ffdhe4096" 00:16:39.567 } 00:16:39.567 } 00:16:39.567 ]' 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.567 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.824 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.757 15:58:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.757 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.014 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.589 00:16:41.590 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.590 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.590 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.892 { 00:16:41.892 "cntlid": 31, 00:16:41.892 "qid": 0, 00:16:41.892 "state": "enabled", 00:16:41.892 "thread": "nvmf_tgt_poll_group_000", 00:16:41.892 "listen_address": { 00:16:41.892 "trtype": "TCP", 00:16:41.892 "adrfam": "IPv4", 00:16:41.892 "traddr": "10.0.0.2", 00:16:41.892 "trsvcid": "4420" 00:16:41.892 }, 
00:16:41.892 "peer_address": { 00:16:41.892 "trtype": "TCP", 00:16:41.892 "adrfam": "IPv4", 00:16:41.892 "traddr": "10.0.0.1", 00:16:41.892 "trsvcid": "34522" 00:16:41.892 }, 00:16:41.892 "auth": { 00:16:41.892 "state": "completed", 00:16:41.892 "digest": "sha256", 00:16:41.892 "dhgroup": "ffdhe4096" 00:16:41.892 } 00:16:41.892 } 00:16:41.892 ]' 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.892 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.893 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.893 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.893 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.151 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.085 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.343 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.344 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.344 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.344 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.909 00:16:43.909 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.909 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.909 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.172 { 00:16:44.172 "cntlid": 33, 00:16:44.172 "qid": 0, 00:16:44.172 "state": "enabled", 00:16:44.172 "thread": "nvmf_tgt_poll_group_000", 00:16:44.172 "listen_address": { 00:16:44.172 "trtype": "TCP", 00:16:44.172 "adrfam": "IPv4", 00:16:44.172 "traddr": "10.0.0.2", 00:16:44.172 "trsvcid": "4420" 00:16:44.172 }, 00:16:44.172 "peer_address": { 00:16:44.172 "trtype": "TCP", 00:16:44.172 "adrfam": "IPv4", 00:16:44.172 "traddr": "10.0.0.1", 00:16:44.172 "trsvcid": "34556" 00:16:44.172 }, 00:16:44.172 "auth": { 00:16:44.172 "state": "completed", 00:16:44.172 "digest": "sha256", 00:16:44.172 "dhgroup": "ffdhe6144" 00:16:44.172 } 00:16:44.172 } 00:16:44.172 ]' 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.172 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.430 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.430 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.430 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.430 15:58:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.430 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.687 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.620 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.878 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.443 00:16:46.443 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.443 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.443 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.700 { 00:16:46.700 "cntlid": 35, 00:16:46.700 "qid": 0, 00:16:46.700 "state": "enabled", 00:16:46.700 "thread": "nvmf_tgt_poll_group_000", 00:16:46.700 "listen_address": { 00:16:46.700 "trtype": "TCP", 00:16:46.700 "adrfam": "IPv4", 00:16:46.700 "traddr": "10.0.0.2", 00:16:46.700 "trsvcid": "4420" 00:16:46.700 }, 00:16:46.700 "peer_address": { 00:16:46.700 "trtype": "TCP", 00:16:46.700 "adrfam": "IPv4", 00:16:46.700 "traddr": "10.0.0.1", 00:16:46.700 "trsvcid": "53674" 00:16:46.700 }, 00:16:46.700 "auth": { 00:16:46.700 "state": "completed", 00:16:46.700 "digest": "sha256", 00:16:46.700 "dhgroup": "ffdhe6144" 00:16:46.700 } 00:16:46.700 } 00:16:46.700 ]' 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.700 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.264 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.196 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.453 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.454 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.019 00:16:49.019 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.019 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.019 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.276 { 00:16:49.276 "cntlid": 37, 00:16:49.276 "qid": 0, 00:16:49.276 "state": "enabled", 00:16:49.276 "thread": "nvmf_tgt_poll_group_000", 00:16:49.276 "listen_address": { 00:16:49.276 "trtype": "TCP", 00:16:49.276 "adrfam": "IPv4", 00:16:49.276 "traddr": "10.0.0.2", 00:16:49.276 "trsvcid": "4420" 00:16:49.276 }, 00:16:49.276 "peer_address": { 00:16:49.276 "trtype": "TCP", 00:16:49.276 "adrfam": "IPv4", 00:16:49.276 "traddr": "10.0.0.1", 00:16:49.276 "trsvcid": "53714" 00:16:49.276 }, 00:16:49.276 "auth": { 00:16:49.276 "state": "completed", 00:16:49.276 "digest": "sha256", 00:16:49.276 "dhgroup": "ffdhe6144" 00:16:49.276 } 00:16:49.276 } 00:16:49.276 ]' 00:16:49.276 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.276 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.534 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.467 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:50.725 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:51.290
00:16:51.290 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:51.290 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:51.290 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:51.548 {
00:16:51.548 "cntlid": 39,
00:16:51.548 "qid": 0,
00:16:51.548 "state": "enabled",
00:16:51.548 "thread": "nvmf_tgt_poll_group_000",
00:16:51.548 "listen_address": {
00:16:51.548 "trtype": "TCP",
00:16:51.548 "adrfam": "IPv4",
00:16:51.548 "traddr": "10.0.0.2",
00:16:51.548 "trsvcid": "4420"
00:16:51.548 },
00:16:51.548 "peer_address": {
00:16:51.548 "trtype": "TCP",
00:16:51.548 "adrfam": "IPv4",
00:16:51.548 "traddr": "10.0.0.1",
00:16:51.548 "trsvcid": "53734"
00:16:51.548 },
00:16:51.548 "auth": {
00:16:51.548 "state": "completed",
00:16:51.548 "digest": "sha256",
00:16:51.548 "dhgroup": "ffdhe6144"
00:16:51.548 }
00:16:51.548 }
00:16:51.548 ]'
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:51.548 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:51.805 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:51.805 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:51.805 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:51.805 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:51.805 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:52.063 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=:
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:52.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
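The trace above is one complete pass of the script's connect_authenticate helper: the target authorizes the host NQN on the subsystem with a DH-HMAC-CHAP key, the SPDK host stack attaches a controller presenting the matching key, the qpair's auth block is verified, and everything is torn down again. Condensed from the trace itself (both commands appear verbatim above; key3 names a key registered earlier in the run, and the full rpc.py path is shortened here):

  # target side: authorize the host NQN for cnode0, authenticating with key3
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
  # host side (second SPDK app listening on /var/tmp/host.sock): attach with the same key
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3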
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:52.996 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.253 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:54.185
00:16:54.185 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:54.185 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:54.185 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:54.443 {
00:16:54.443 "cntlid": 41,
00:16:54.443 "qid": 0,
00:16:54.443 "state": "enabled",
00:16:54.443 "thread": "nvmf_tgt_poll_group_000",
00:16:54.443 "listen_address": {
00:16:54.443 "trtype": "TCP",
00:16:54.443 "adrfam": "IPv4",
00:16:54.443 "traddr": "10.0.0.2",
00:16:54.443 "trsvcid": "4420"
00:16:54.443 },
00:16:54.443 "peer_address": {
00:16:54.443 "trtype": "TCP",
00:16:54.443 "adrfam": "IPv4",
00:16:54.443 "traddr": "10.0.0.1",
00:16:54.443 "trsvcid": "53768"
00:16:54.443 },
00:16:54.443 "auth": {
00:16:54.443 "state": "completed",
00:16:54.443 "digest": "sha256",
00:16:54.443 "dhgroup": "ffdhe8192"
00:16:54.443 }
00:16:54.443 }
00:16:54.443 ]'
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:54.443 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:54.701 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=:
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:55.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:55.643 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.934 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.935 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:56.869
00:16:56.869 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:56.869 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:56.869 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:57.126 {
00:16:57.126 "cntlid": 43,
00:16:57.126 "qid": 0,
00:16:57.126 "state": "enabled",
00:16:57.126 "thread": "nvmf_tgt_poll_group_000",
00:16:57.126 "listen_address": {
00:16:57.126 "trtype": "TCP",
00:16:57.126 "adrfam": "IPv4",
00:16:57.126 "traddr": "10.0.0.2",
00:16:57.126 "trsvcid": "4420"
00:16:57.126 },
00:16:57.126 "peer_address": {
00:16:57.126 "trtype": "TCP",
00:16:57.126 "adrfam": "IPv4",
00:16:57.126 "traddr": "10.0.0.1",
00:16:57.126 "trsvcid": "42980"
00:16:57.126 },
00:16:57.126 "auth": {
00:16:57.126 "state": "completed",
00:16:57.126 "digest": "sha256",
00:16:57.126 "dhgroup": "ffdhe8192"
00:16:57.126 }
00:16:57.126 }
00:16:57.126 ]'
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:57.126 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:57.126 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:57.126 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:57.126 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.383 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==:
00:16:58.314 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.314 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:58.314 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.314 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.572 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
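Each iteration then proves the negotiated parameters instead of trusting the connect: the subsystem's qpairs are dumped and the auth block is checked field by field with jq, exactly as in the traces above. A minimal standalone version of that check, assuming the same subsystem NQN and the sha256/ffdhe8192 combination of the iteration just completed:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # auth finished OK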
"${!keys[@]}" 00:16:58.572 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.572 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.829 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.761 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.761 { 00:16:59.761 "cntlid": 45, 00:16:59.761 "qid": 0, 00:16:59.761 "state": "enabled", 00:16:59.761 "thread": "nvmf_tgt_poll_group_000", 00:16:59.761 "listen_address": { 00:16:59.761 "trtype": "TCP", 00:16:59.761 "adrfam": "IPv4", 00:16:59.761 "traddr": "10.0.0.2", 00:16:59.761 "trsvcid": "4420" 
00:16:59.761 }, 00:16:59.761 "peer_address": { 00:16:59.761 "trtype": "TCP", 00:16:59.761 "adrfam": "IPv4", 00:16:59.761 "traddr": "10.0.0.1", 00:16:59.761 "trsvcid": "43018" 00:16:59.761 }, 00:16:59.761 "auth": { 00:16:59.761 "state": "completed", 00:16:59.761 "digest": "sha256", 00:16:59.761 "dhgroup": "ffdhe8192" 00:16:59.761 } 00:16:59.761 } 00:16:59.761 ]' 00:16:59.761 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.019 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.277 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.468 15:58:28 
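The strings handed to nvme connect are in-band authentication secrets in the DHHC-1 representation: base64 key material plus a short trailing check value, where the number after DHHC-1: appears to indicate how the secret was transformed (00 for an untransformed secret, 01/02/03 for SHA-256/-384/-512), which is why the key0 through key3 secrets above carry different prefixes. A hedged example of producing such a secret; recent nvme-cli builds ship a gen-dhchap-key subcommand, but the exact flag spellings may differ across versions:

  # hypothetical invocation; check 'nvme gen-dhchap-key --help' on your build
  nvme gen-dhchap-key --hmac=1 --key-length=32 \
      --nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55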
00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:01.210 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:01.468 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:02.401
00:17:02.401 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:02.401 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.401 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:02.659 {
00:17:02.659 "cntlid": 47,
00:17:02.659 "qid": 0,
00:17:02.659 "state": "enabled",
00:17:02.659 "thread": "nvmf_tgt_poll_group_000",
00:17:02.659 "listen_address": {
00:17:02.659 "trtype": "TCP",
00:17:02.659 "adrfam": "IPv4",
00:17:02.659 "traddr": "10.0.0.2",
00:17:02.659 "trsvcid": "4420"
00:17:02.659 },
00:17:02.659 "peer_address": {
00:17:02.659 "trtype": "TCP",
00:17:02.659 "adrfam": "IPv4",
00:17:02.659 "traddr": "10.0.0.1",
00:17:02.659 "trsvcid": "43052"
00:17:02.659 },
00:17:02.659 "auth": {
00:17:02.659 "state": "completed",
00:17:02.659 "digest": "sha256",
00:17:02.659 "dhgroup": "ffdhe8192"
00:17:02.659 }
00:17:02.659 }
00:17:02.659 ]'
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:02.659 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:02.918 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:02.918 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:02.918 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:02.918 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:02.918 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.176 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=:
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
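With the sha256 passes done, the @91 loop in the trace advances to sha384 and the dhgroup loop restarts at null, which requests DH-HMAC-CHAP without an ephemeral Diffie-Hellman exchange. The @91 through @96 line references correspond to a nested test matrix of roughly this shape (a paraphrase of the script, using the array names visible in the trace; hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock):

  for digest in "${digests[@]}"; do        # sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ..., ffdhe8192
      for keyid in "${!keys[@]}"; do       # 0..3
        # pin the host to exactly one digest/dhgroup combination
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done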
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:04.109 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.367 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.625
00:17:04.625 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:04.625 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:04.625 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:04.883 {
00:17:04.883 "cntlid": 49,
00:17:04.883 "qid": 0,
00:17:04.883 "state": "enabled",
00:17:04.883 "thread": "nvmf_tgt_poll_group_000",
00:17:04.883 "listen_address": {
00:17:04.883 "trtype": "TCP",
00:17:04.883 "adrfam": "IPv4",
00:17:04.883 "traddr": "10.0.0.2",
00:17:04.883 "trsvcid": "4420"
00:17:04.883 },
00:17:04.883 "peer_address": {
00:17:04.883 "trtype": "TCP",
00:17:04.883 "adrfam": "IPv4",
00:17:04.883 "traddr": "10.0.0.1",
00:17:04.883 "trsvcid": "56608"
00:17:04.883 },
00:17:04.883 "auth": {
00:17:04.883 "state": "completed",
00:17:04.883 "digest": "sha384",
00:17:04.883 "dhgroup": "null"
00:17:04.883 }
00:17:04.883 }
00:17:04.883 ]'
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:04.883 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:05.141 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:05.141 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:05.141 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:05.141 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:05.141 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.399 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=:
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:06.331 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.589 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.846
00:17:06.846 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:06.846 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.846 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:07.103 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.104 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.104 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.104 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.104 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.104 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:07.104 {
00:17:07.104 "cntlid": 51,
00:17:07.104 "qid": 0,
00:17:07.104 "state": "enabled",
00:17:07.104 "thread": "nvmf_tgt_poll_group_000",
00:17:07.104 "listen_address": {
00:17:07.104 "trtype": "TCP",
00:17:07.104 "adrfam": "IPv4",
00:17:07.104 "traddr": "10.0.0.2",
00:17:07.104 "trsvcid": "4420"
00:17:07.104 },
00:17:07.104 "peer_address": {
00:17:07.104 "trtype": "TCP",
00:17:07.104 "adrfam": "IPv4",
00:17:07.104 "traddr": "10.0.0.1",
00:17:07.104 "trsvcid": "56630"
00:17:07.104 },
00:17:07.104 "auth": {
00:17:07.104 "state": "completed",
00:17:07.104 "digest": "sha384",
00:17:07.104 "dhgroup": "null"
00:17:07.104 }
00:17:07.104 }
00:17:07.104 ]'
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.361 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.619 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==:
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
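Note the asymmetry between key ids in these traces: the key0 through key2 passes also configure --dhchap-ctrlr-key, making authentication bidirectional (the host verifies the controller too), while the key3 passes send only --dhchap-key. That falls out of the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line above: when the ckeys slot for that key id is empty, the array expands to nothing. In isolation (hypothetical key names):

  ckeys=(c0 c1 c2 "")    # slot 3 deliberately left empty, as in this run
  keyid=3
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo ${#ckey[@]}       # 0: no controller-key arguments are produced
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo ${#ckey[@]}       # 2: expands to --dhchap-ctrlr-key ckey1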
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:08.552 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:08.809 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.066
00:17:09.066 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:09.066 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.067 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.324 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:09.324 {
00:17:09.324 "cntlid": 53,
00:17:09.324 "qid": 0,
00:17:09.324 "state": "enabled",
00:17:09.324 "thread": "nvmf_tgt_poll_group_000",
00:17:09.324 "listen_address": {
00:17:09.324 "trtype": "TCP",
00:17:09.324 "adrfam": "IPv4",
00:17:09.324 "traddr": "10.0.0.2",
00:17:09.324 "trsvcid": "4420"
00:17:09.324 },
00:17:09.324 "peer_address": {
00:17:09.324 "trtype": "TCP",
00:17:09.324 "adrfam": "IPv4",
00:17:09.324 "traddr": "10.0.0.1",
00:17:09.324 "trsvcid": "56658"
00:17:09.324 },
00:17:09.324 "auth": {
00:17:09.324 "state": "completed",
00:17:09.324 "digest": "sha384",
00:17:09.324 "dhgroup": "null"
00:17:09.324 }
00:17:09.324 }
00:17:09.324 ]'
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:09.583 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:09.841 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl:
00:17:10.774 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:10.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:10.775 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.032 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:11.033 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.033 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.033 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.033 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:11.033 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:11.290
00:17:11.547 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:11.547 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.547 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:11.805 {
00:17:11.805 "cntlid": 55,
00:17:11.805 "qid": 0,
00:17:11.805 "state": "enabled",
00:17:11.805 "thread": "nvmf_tgt_poll_group_000",
00:17:11.805 "listen_address": {
00:17:11.805 "trtype": "TCP",
00:17:11.805 "adrfam": "IPv4",
00:17:11.805 "traddr": "10.0.0.2",
00:17:11.805 "trsvcid": "4420"
00:17:11.805 },
00:17:11.805 "peer_address": {
00:17:11.805 "trtype": "TCP",
00:17:11.805 "adrfam": "IPv4",
00:17:11.805 "traddr": "10.0.0.1",
00:17:11.805 "trsvcid": "56690"
00:17:11.805 },
00:17:11.805 "auth": {
00:17:11.805 "state": "completed",
00:17:11.805 "digest": "sha384",
00:17:11.805 "dhgroup": "null"
00:17:11.805 }
00:17:11.805 }
00:17:11.805 ]'
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:11.805 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:12.062 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.062 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=:
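Besides the SPDK host stack, every pass also exercises the Linux kernel initiator: nvme-cli connects with the same key material expressed as DHHC-1 secrets (-i 1 keeps it to a single I/O queue), then disconnects, as the entries around this point show. Condensed from the trace, with the long secret elided here:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:03:...'     # host secret, full value as in the log
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0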
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:12.994 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.252 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.816
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.816 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:13.816 {
00:17:13.816 "cntlid": 57,
00:17:13.816 "qid": 0,
00:17:13.816 "state": "enabled",
00:17:13.816 "thread": "nvmf_tgt_poll_group_000",
00:17:13.816 "listen_address": {
00:17:13.816 "trtype": "TCP",
00:17:13.816 "adrfam": "IPv4",
00:17:13.816 "traddr": "10.0.0.2",
00:17:13.816 "trsvcid": "4420"
00:17:13.816 },
00:17:13.816 "peer_address": {
00:17:13.816 "trtype": "TCP",
00:17:13.816 "adrfam": "IPv4",
00:17:13.816 "traddr": "10.0.0.1",
00:17:13.816 "trsvcid": "56718"
00:17:13.816 },
00:17:13.816 "auth": {
00:17:13.816 "state": "completed",
00:17:13.816 "digest": "sha384",
00:17:13.816 "dhgroup": "ffdhe2048"
00:17:13.816 }
00:17:13.816 }
00:17:13.816 ]'
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:14.073 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:14.330 15:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=:
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:15.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
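bdev_nvme_set_options is reissued before every attach because the host negotiates from its allowed lists; pinning both lists to a single entry is what forces each iteration onto exactly one digest/dhgroup combination. The ffdhe2048 passes that continue below use the smallest of the RFC 7919 FFDHE groups. Host-side sketch, as in the trace (rpc.py path shortened):

  # restrict the SPDK host to one DH-HMAC-CHAP digest and one DH group
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048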
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:15.261 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:15.518 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.079
00:17:16.079 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:16.079 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:16.079 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:16.334 {
00:17:16.334 "cntlid": 59,
00:17:16.334 "qid": 0,
00:17:16.334 "state": "enabled",
00:17:16.334 "thread": "nvmf_tgt_poll_group_000",
00:17:16.334 "listen_address": {
00:17:16.334 "trtype": "TCP",
00:17:16.334 "adrfam": "IPv4",
00:17:16.334 "traddr": "10.0.0.2",
00:17:16.334 "trsvcid": "4420"
00:17:16.334 },
00:17:16.334 "peer_address": {
00:17:16.334 "trtype": "TCP",
00:17:16.334 "adrfam": "IPv4",
00:17:16.334 "traddr": "10.0.0.1",
00:17:16.334 "trsvcid": "44004"
00:17:16.334 },
00:17:16.334 "auth": {
00:17:16.334 "state": "completed",
00:17:16.334 "digest": "sha384",
00:17:16.334 "dhgroup": "ffdhe2048"
00:17:16.334 }
00:17:16.334 }
00:17:16.334 ]'
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:16.334 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:16.590 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==:
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:17.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:17.519 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.775 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.776 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:18.340
00:17:18.340 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:18.340 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:18.340 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.340 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:18.340 {
00:17:18.340 "cntlid": 61,
00:17:18.340 "qid": 0,
00:17:18.340 "state": "enabled",
00:17:18.340 "thread": "nvmf_tgt_poll_group_000",
00:17:18.340 "listen_address": {
00:17:18.340 "trtype": "TCP",
00:17:18.340 "adrfam": "IPv4",
00:17:18.340 "traddr": "10.0.0.2",
00:17:18.340 "trsvcid": "4420"
00:17:18.340 },
00:17:18.340 "peer_address": {
00:17:18.340 "trtype": "TCP",
00:17:18.340 "adrfam": "IPv4",
00:17:18.340 "traddr": "10.0.0.1",
00:17:18.340 "trsvcid": "44032"
00:17:18.340 },
00:17:18.340 "auth": {
00:17:18.340 "state": "completed",
00:17:18.340 "digest": "sha384",
00:17:18.340 "dhgroup": "ffdhe2048"
00:17:18.340 }
00:17:18.340 }
00:17:18.340 ]'
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:18.649 15:58:45 nvmf_tcp.nvmf_auth_target --
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.905 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.832 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.088 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.650 00:17:20.650 15:58:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.650 { 00:17:20.650 "cntlid": 63, 00:17:20.650 "qid": 0, 00:17:20.650 "state": "enabled", 00:17:20.650 "thread": "nvmf_tgt_poll_group_000", 00:17:20.650 "listen_address": { 00:17:20.650 "trtype": "TCP", 00:17:20.650 "adrfam": "IPv4", 00:17:20.650 "traddr": "10.0.0.2", 00:17:20.650 "trsvcid": "4420" 00:17:20.650 }, 00:17:20.650 "peer_address": { 00:17:20.650 "trtype": "TCP", 00:17:20.650 "adrfam": "IPv4", 00:17:20.650 "traddr": "10.0.0.1", 00:17:20.650 "trsvcid": "44056" 00:17:20.650 }, 00:17:20.650 "auth": { 00:17:20.650 "state": "completed", 00:17:20.650 "digest": "sha384", 00:17:20.650 "dhgroup": "ffdhe2048" 00:17:20.650 } 00:17:20.650 } 00:17:20.650 ]' 00:17:20.650 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.906 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.162 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
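For reference, the cycle that target/auth.sh repeats in this section for each digest/dhgroup/key combination can be condensed as follows. This is a sketch reconstructed from the commands visible in the trace, not the script's verbatim source; the RPC path, host socket /var/tmp/host.sock, addresses, NQNs, and key names are taken from the log itself, while HOST_NQN is a shorthand variable introduced here for brevity, and the target-side calls are assumed to use rpc.py's default socket (the log routes them through the rpc_cmd helper, which hides the socket argument).

    HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side: restrict negotiation to the digest/dhgroup pair under test (here sha384 + ffdhe3072).
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # Target side: register the host NQN with the DH-CHAP key pair under test (keys 0-3 in turn).
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller with the same keys; a successful attach implies authentication completed.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify what the target's queue pair actually negotiated.
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # Expected output: sha384 / ffdhe3072 / completed. Then detach before the next iteration.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

After each per-key pass the log also exercises the kernel-initiator path: nvme connect with the base64 DHHC-1 secrets (--dhchap-secret / --dhchap-ctrl-secret), nvme disconnect, then nvmf_subsystem_remove_host to clear the host entry before the next dhgroup (ffdhe2048 through ffdhe6144) is configured.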
00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.091 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.347 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.910 00:17:22.910 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.910 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.910 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.166 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.166 { 
00:17:23.166 "cntlid": 65, 00:17:23.166 "qid": 0, 00:17:23.166 "state": "enabled", 00:17:23.166 "thread": "nvmf_tgt_poll_group_000", 00:17:23.166 "listen_address": { 00:17:23.166 "trtype": "TCP", 00:17:23.166 "adrfam": "IPv4", 00:17:23.166 "traddr": "10.0.0.2", 00:17:23.166 "trsvcid": "4420" 00:17:23.166 }, 00:17:23.166 "peer_address": { 00:17:23.166 "trtype": "TCP", 00:17:23.166 "adrfam": "IPv4", 00:17:23.166 "traddr": "10.0.0.1", 00:17:23.166 "trsvcid": "44090" 00:17:23.166 }, 00:17:23.166 "auth": { 00:17:23.166 "state": "completed", 00:17:23.166 "digest": "sha384", 00:17:23.166 "dhgroup": "ffdhe3072" 00:17:23.166 } 00:17:23.166 } 00:17:23.166 ]' 00:17:23.167 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.167 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.167 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.167 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.167 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.167 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.167 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.167 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.456 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.432 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.690 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:24.690 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.690 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:24.690 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.690 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.691 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.259 00:17:25.259 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.259 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.259 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.517 { 00:17:25.517 "cntlid": 67, 00:17:25.517 "qid": 0, 00:17:25.517 "state": "enabled", 00:17:25.517 "thread": "nvmf_tgt_poll_group_000", 00:17:25.517 "listen_address": { 00:17:25.517 "trtype": "TCP", 00:17:25.517 "adrfam": "IPv4", 00:17:25.517 "traddr": "10.0.0.2", 00:17:25.517 "trsvcid": "4420" 00:17:25.517 }, 00:17:25.517 "peer_address": { 00:17:25.517 "trtype": "TCP", 00:17:25.517 "adrfam": "IPv4", 00:17:25.517 "traddr": "10.0.0.1", 00:17:25.517 "trsvcid": "49788" 00:17:25.517 }, 00:17:25.517 "auth": { 00:17:25.517 "state": "completed", 00:17:25.517 "digest": "sha384", 00:17:25.517 "dhgroup": "ffdhe3072" 00:17:25.517 } 00:17:25.517 } 00:17:25.517 ]' 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.517 15:58:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.517 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.775 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.710 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.968 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.533 00:17:27.533 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.533 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.533 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.791 { 00:17:27.791 "cntlid": 69, 00:17:27.791 "qid": 0, 00:17:27.791 "state": "enabled", 00:17:27.791 "thread": "nvmf_tgt_poll_group_000", 00:17:27.791 "listen_address": { 00:17:27.791 "trtype": "TCP", 00:17:27.791 "adrfam": "IPv4", 00:17:27.791 "traddr": "10.0.0.2", 00:17:27.791 "trsvcid": "4420" 00:17:27.791 }, 00:17:27.791 "peer_address": { 00:17:27.791 "trtype": "TCP", 00:17:27.791 "adrfam": "IPv4", 00:17:27.791 "traddr": "10.0.0.1", 00:17:27.791 "trsvcid": "49804" 00:17:27.791 }, 00:17:27.791 "auth": { 00:17:27.791 "state": "completed", 00:17:27.791 "digest": "sha384", 00:17:27.791 "dhgroup": "ffdhe3072" 00:17:27.791 } 00:17:27.791 } 00:17:27.791 ]' 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.791 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.051 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret 
DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.987 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.553 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.812 00:17:29.812 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.812 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.812 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.069 { 00:17:30.069 "cntlid": 71, 00:17:30.069 "qid": 0, 00:17:30.069 "state": "enabled", 00:17:30.069 "thread": "nvmf_tgt_poll_group_000", 00:17:30.069 "listen_address": { 00:17:30.069 "trtype": "TCP", 00:17:30.069 "adrfam": "IPv4", 00:17:30.069 "traddr": "10.0.0.2", 00:17:30.069 "trsvcid": "4420" 00:17:30.069 }, 00:17:30.069 "peer_address": { 00:17:30.069 "trtype": "TCP", 00:17:30.069 "adrfam": "IPv4", 00:17:30.069 "traddr": "10.0.0.1", 00:17:30.069 "trsvcid": "49834" 00:17:30.069 }, 00:17:30.069 "auth": { 00:17:30.069 "state": "completed", 00:17:30.069 "digest": "sha384", 00:17:30.069 "dhgroup": "ffdhe3072" 00:17:30.069 } 00:17:30.069 } 00:17:30.069 ]' 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.069 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.634 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.568 15:58:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.825 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.082 00:17:32.082 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.082 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.082 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.339 { 00:17:32.339 "cntlid": 73, 00:17:32.339 "qid": 0, 00:17:32.339 "state": "enabled", 00:17:32.339 "thread": "nvmf_tgt_poll_group_000", 00:17:32.339 "listen_address": { 00:17:32.339 "trtype": "TCP", 00:17:32.339 "adrfam": "IPv4", 00:17:32.339 "traddr": "10.0.0.2", 00:17:32.339 "trsvcid": "4420" 00:17:32.339 }, 00:17:32.339 "peer_address": { 00:17:32.339 "trtype": "TCP", 00:17:32.339 "adrfam": "IPv4", 00:17:32.339 "traddr": "10.0.0.1", 00:17:32.339 "trsvcid": "49852" 00:17:32.339 }, 00:17:32.339 "auth": { 00:17:32.339 
"state": "completed", 00:17:32.339 "digest": "sha384", 00:17:32.339 "dhgroup": "ffdhe4096" 00:17:32.339 } 00:17:32.339 } 00:17:32.339 ]' 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.339 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.595 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.595 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.595 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.851 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:33.789 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.048 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.336 00:17:34.336 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.336 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.336 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.594 { 00:17:34.594 "cntlid": 75, 00:17:34.594 "qid": 0, 00:17:34.594 "state": "enabled", 00:17:34.594 "thread": "nvmf_tgt_poll_group_000", 00:17:34.594 "listen_address": { 00:17:34.594 "trtype": "TCP", 00:17:34.594 "adrfam": "IPv4", 00:17:34.594 "traddr": "10.0.0.2", 00:17:34.594 "trsvcid": "4420" 00:17:34.594 }, 00:17:34.594 "peer_address": { 00:17:34.594 "trtype": "TCP", 00:17:34.594 "adrfam": "IPv4", 00:17:34.594 "traddr": "10.0.0.1", 00:17:34.594 "trsvcid": "53274" 00:17:34.594 }, 00:17:34.594 "auth": { 00:17:34.594 "state": "completed", 00:17:34.594 "digest": "sha384", 00:17:34.594 "dhgroup": "ffdhe4096" 00:17:34.594 } 00:17:34.594 } 00:17:34.594 ]' 00:17:34.594 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.852 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.110 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.046 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.304 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:36.868 00:17:36.868 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.868 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.868 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.127 { 00:17:37.127 "cntlid": 77, 00:17:37.127 "qid": 0, 00:17:37.127 "state": "enabled", 00:17:37.127 "thread": "nvmf_tgt_poll_group_000", 00:17:37.127 "listen_address": { 00:17:37.127 "trtype": "TCP", 00:17:37.127 "adrfam": "IPv4", 00:17:37.127 "traddr": "10.0.0.2", 00:17:37.127 "trsvcid": "4420" 00:17:37.127 }, 00:17:37.127 "peer_address": { 00:17:37.127 "trtype": "TCP", 00:17:37.127 "adrfam": "IPv4", 00:17:37.127 "traddr": "10.0.0.1", 00:17:37.127 "trsvcid": "53288" 00:17:37.127 }, 00:17:37.127 "auth": { 00:17:37.127 "state": "completed", 00:17:37.127 "digest": "sha384", 00:17:37.127 "dhgroup": "ffdhe4096" 00:17:37.127 } 00:17:37.127 } 00:17:37.127 ]' 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.127 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.389 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.372 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.630 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.631 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.196 00:17:39.196 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.196 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.196 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.196 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.454 { 00:17:39.454 "cntlid": 79, 00:17:39.454 "qid": 
0, 00:17:39.454 "state": "enabled", 00:17:39.454 "thread": "nvmf_tgt_poll_group_000", 00:17:39.454 "listen_address": { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.2", 00:17:39.454 "trsvcid": "4420" 00:17:39.454 }, 00:17:39.454 "peer_address": { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.1", 00:17:39.454 "trsvcid": "53318" 00:17:39.454 }, 00:17:39.454 "auth": { 00:17:39.454 "state": "completed", 00:17:39.454 "digest": "sha384", 00:17:39.454 "dhgroup": "ffdhe4096" 00:17:39.454 } 00:17:39.454 } 00:17:39.454 ]' 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.454 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.710 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.643 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.912 15:59:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.912 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.913 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.913 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.913 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.478 00:17:41.478 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.478 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.478 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.735 { 00:17:41.735 "cntlid": 81, 00:17:41.735 "qid": 0, 00:17:41.735 "state": "enabled", 00:17:41.735 "thread": "nvmf_tgt_poll_group_000", 00:17:41.735 "listen_address": { 00:17:41.735 "trtype": "TCP", 00:17:41.735 "adrfam": "IPv4", 00:17:41.735 "traddr": "10.0.0.2", 00:17:41.735 "trsvcid": "4420" 00:17:41.735 }, 00:17:41.735 "peer_address": { 00:17:41.735 "trtype": "TCP", 00:17:41.735 "adrfam": "IPv4", 00:17:41.735 "traddr": "10.0.0.1", 00:17:41.735 "trsvcid": "53338" 00:17:41.735 }, 00:17:41.735 "auth": { 00:17:41.735 "state": "completed", 00:17:41.735 "digest": "sha384", 00:17:41.735 "dhgroup": "ffdhe6144" 00:17:41.735 } 00:17:41.735 } 00:17:41.735 ]' 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.735 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.736 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.736 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.006 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.941 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.510 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.076 00:17:44.076 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.076 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.076 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.076 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.077 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.077 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.077 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.077 15:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.077 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.077 { 00:17:44.077 "cntlid": 83, 00:17:44.077 "qid": 0, 00:17:44.077 "state": "enabled", 00:17:44.077 "thread": "nvmf_tgt_poll_group_000", 00:17:44.077 "listen_address": { 00:17:44.077 "trtype": "TCP", 00:17:44.077 "adrfam": "IPv4", 00:17:44.077 "traddr": "10.0.0.2", 00:17:44.077 "trsvcid": "4420" 00:17:44.077 }, 00:17:44.077 "peer_address": { 00:17:44.077 "trtype": "TCP", 00:17:44.077 "adrfam": "IPv4", 00:17:44.077 "traddr": "10.0.0.1", 00:17:44.077 "trsvcid": "53364" 00:17:44.077 }, 00:17:44.077 "auth": { 00:17:44.077 "state": "completed", 00:17:44.077 "digest": "sha384", 00:17:44.077 "dhgroup": "ffdhe6144" 00:17:44.077 } 00:17:44.077 } 00:17:44.077 ]' 00:17:44.077 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.334 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.591 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret 
DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.527 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.786 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.353 00:17:46.353 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.353 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.353 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.642 { 00:17:46.642 "cntlid": 85, 00:17:46.642 "qid": 0, 00:17:46.642 "state": "enabled", 00:17:46.642 "thread": "nvmf_tgt_poll_group_000", 00:17:46.642 "listen_address": { 00:17:46.642 "trtype": "TCP", 00:17:46.642 "adrfam": "IPv4", 00:17:46.642 "traddr": "10.0.0.2", 00:17:46.642 "trsvcid": "4420" 00:17:46.642 }, 00:17:46.642 "peer_address": { 00:17:46.642 "trtype": "TCP", 00:17:46.642 "adrfam": "IPv4", 00:17:46.642 "traddr": "10.0.0.1", 00:17:46.642 "trsvcid": "52180" 00:17:46.642 }, 00:17:46.642 "auth": { 00:17:46.642 "state": "completed", 00:17:46.642 "digest": "sha384", 00:17:46.642 "dhgroup": "ffdhe6144" 00:17:46.642 } 00:17:46.642 } 00:17:46.642 ]' 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.642 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.901 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
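Each pass of the loop above repeats the same four-step RPC pattern: pin the host-side bdev_nvme layer to a single digest/dhgroup combination, allow the host NQN on the subsystem with the key under test, attach a controller (which forces a fresh DH-HMAC-CHAP negotiation), then read the qpair's auth block back from the target. A minimal sketch of one ffdhe6144 iteration, assuming the target (default RPC socket) and the host application (/var/tmp/host.sock) are already running and that the named keys key0..key3 / ckey0..ckey3 were registered on both sides earlier in the test:

  #!/usr/bin/env bash
  # Hedged sketch of one connect_authenticate() iteration from target/auth.sh.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict the initiator to one digest and one DH group.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host on the subsystem with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attaching a controller triggers DH-HMAC-CHAP negotiation.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: the qpair's auth block should echo the requested parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # "completed"

The jq assertions in the log compare .auth.digest, .auth.dhgroup, and .auth.state against the values just configured, after which the controller is detached and the host removed before the next key is tried.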
00:17:48.277 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.277 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.844 00:17:48.844 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.844 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.844 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.101 { 00:17:49.101 "cntlid": 87, 00:17:49.101 "qid": 0, 00:17:49.101 "state": "enabled", 00:17:49.101 "thread": "nvmf_tgt_poll_group_000", 00:17:49.101 "listen_address": { 00:17:49.101 "trtype": "TCP", 00:17:49.101 "adrfam": "IPv4", 00:17:49.101 "traddr": "10.0.0.2", 00:17:49.101 "trsvcid": "4420" 00:17:49.101 }, 00:17:49.101 "peer_address": { 00:17:49.101 "trtype": "TCP", 00:17:49.101 "adrfam": "IPv4", 00:17:49.101 "traddr": "10.0.0.1", 00:17:49.101 "trsvcid": "52204" 00:17:49.101 }, 00:17:49.101 "auth": { 00:17:49.101 "state": "completed", 
00:17:49.101 "digest": "sha384", 00:17:49.101 "dhgroup": "ffdhe6144" 00:17:49.101 } 00:17:49.101 } 00:17:49.101 ]' 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.101 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.359 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.296 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.554 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.521 00:17:51.521 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.521 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.521 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.784 { 00:17:51.784 "cntlid": 89, 00:17:51.784 "qid": 0, 00:17:51.784 "state": "enabled", 00:17:51.784 "thread": "nvmf_tgt_poll_group_000", 00:17:51.784 "listen_address": { 00:17:51.784 "trtype": "TCP", 00:17:51.784 "adrfam": "IPv4", 00:17:51.784 "traddr": "10.0.0.2", 00:17:51.784 "trsvcid": "4420" 00:17:51.784 }, 00:17:51.784 "peer_address": { 00:17:51.784 "trtype": "TCP", 00:17:51.784 "adrfam": "IPv4", 00:17:51.784 "traddr": "10.0.0.1", 00:17:51.784 "trsvcid": "52224" 00:17:51.784 }, 00:17:51.784 "auth": { 00:17:51.784 "state": "completed", 00:17:51.784 "digest": "sha384", 00:17:51.784 "dhgroup": "ffdhe8192" 00:17:51.784 } 00:17:51.784 } 00:17:51.784 ]' 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.784 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.042 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.042 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.042 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.300 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.235 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.491 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
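Once the SPDK-host path authenticates, the same secrets are exercised through the kernel initiator via nvme-cli, with the DHHC-1 blobs passed directly on the command line. A hedged sketch using the variables from the previous snippet (the full base64 secrets are elided here; they appear verbatim in the log):

  # Kernel initiator leg: -i 1 limits the connection to a single I/O queue.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:01:...' \
      --dhchap-ctrl-secret 'DHHC-1:02:...'

  # Tear the session down; the log records "disconnected 1 controller(s)".
  nvme disconnect -n "$subnqn"

Note that the key3 iterations in the log pass only --dhchap-secret with no --dhchap-ctrl-secret (and the matching nvmf_subsystem_add_host omits --dhchap-ctrlr-key), i.e. they exercise unidirectional rather than bidirectional authentication.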
00:17:54.425 00:17:54.425 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.425 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.425 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.681 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.681 { 00:17:54.681 "cntlid": 91, 00:17:54.681 "qid": 0, 00:17:54.681 "state": "enabled", 00:17:54.681 "thread": "nvmf_tgt_poll_group_000", 00:17:54.681 "listen_address": { 00:17:54.681 "trtype": "TCP", 00:17:54.681 "adrfam": "IPv4", 00:17:54.681 "traddr": "10.0.0.2", 00:17:54.681 "trsvcid": "4420" 00:17:54.681 }, 00:17:54.681 "peer_address": { 00:17:54.681 "trtype": "TCP", 00:17:54.681 "adrfam": "IPv4", 00:17:54.681 "traddr": "10.0.0.1", 00:17:54.681 "trsvcid": "52258" 00:17:54.681 }, 00:17:54.681 "auth": { 00:17:54.681 "state": "completed", 00:17:54.681 "digest": "sha384", 00:17:54.681 "dhgroup": "ffdhe8192" 00:17:54.681 } 00:17:54.681 } 00:17:54.681 ]' 00:17:54.682 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.682 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.682 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.938 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.938 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.938 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.938 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.938 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.195 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:17:56.131 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.132 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.390 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.328 00:17:57.328 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.328 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.328 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.585 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.585 { 
00:17:57.585 "cntlid": 93, 00:17:57.585 "qid": 0, 00:17:57.585 "state": "enabled", 00:17:57.585 "thread": "nvmf_tgt_poll_group_000", 00:17:57.585 "listen_address": { 00:17:57.585 "trtype": "TCP", 00:17:57.585 "adrfam": "IPv4", 00:17:57.585 "traddr": "10.0.0.2", 00:17:57.585 "trsvcid": "4420" 00:17:57.585 }, 00:17:57.585 "peer_address": { 00:17:57.585 "trtype": "TCP", 00:17:57.585 "adrfam": "IPv4", 00:17:57.585 "traddr": "10.0.0.1", 00:17:57.585 "trsvcid": "59512" 00:17:57.585 }, 00:17:57.585 "auth": { 00:17:57.585 "state": "completed", 00:17:57.585 "digest": "sha384", 00:17:57.585 "dhgroup": "ffdhe8192" 00:17:57.585 } 00:17:57.585 } 00:17:57.585 ]' 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.586 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.842 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.842 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.842 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.777 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.342 15:59:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.342 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.276 00:18:00.277 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.277 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.277 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.277 { 00:18:00.277 "cntlid": 95, 00:18:00.277 "qid": 0, 00:18:00.277 "state": "enabled", 00:18:00.277 "thread": "nvmf_tgt_poll_group_000", 00:18:00.277 "listen_address": { 00:18:00.277 "trtype": "TCP", 00:18:00.277 "adrfam": "IPv4", 00:18:00.277 "traddr": "10.0.0.2", 00:18:00.277 "trsvcid": "4420" 00:18:00.277 }, 00:18:00.277 "peer_address": { 00:18:00.277 "trtype": "TCP", 00:18:00.277 "adrfam": "IPv4", 00:18:00.277 "traddr": "10.0.0.1", 00:18:00.277 "trsvcid": "59530" 00:18:00.277 }, 00:18:00.277 "auth": { 00:18:00.277 "state": "completed", 00:18:00.277 "digest": "sha384", 00:18:00.277 "dhgroup": "ffdhe8192" 00:18:00.277 } 00:18:00.277 } 00:18:00.277 ]' 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.277 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.277 15:59:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.533 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.533 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.534 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.790 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.725 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.982 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.239 00:18:02.239 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.239 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.239 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.496 { 00:18:02.496 "cntlid": 97, 00:18:02.496 "qid": 0, 00:18:02.496 "state": "enabled", 00:18:02.496 "thread": "nvmf_tgt_poll_group_000", 00:18:02.496 "listen_address": { 00:18:02.496 "trtype": "TCP", 00:18:02.496 "adrfam": "IPv4", 00:18:02.496 "traddr": "10.0.0.2", 00:18:02.496 "trsvcid": "4420" 00:18:02.496 }, 00:18:02.496 "peer_address": { 00:18:02.496 "trtype": "TCP", 00:18:02.496 "adrfam": "IPv4", 00:18:02.496 "traddr": "10.0.0.1", 00:18:02.496 "trsvcid": "59558" 00:18:02.496 }, 00:18:02.496 "auth": { 00:18:02.496 "state": "completed", 00:18:02.496 "digest": "sha512", 00:18:02.496 "dhgroup": "null" 00:18:02.496 } 00:18:02.496 } 00:18:02.496 ]' 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.496 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.756 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.756 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.756 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.756 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.756 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.014 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret 
DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:03.951 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.209 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.466 00:18:04.466 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.466 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.466 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.723 15:59:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.723 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.724 { 00:18:04.724 "cntlid": 99, 00:18:04.724 "qid": 0, 00:18:04.724 "state": "enabled", 00:18:04.724 "thread": "nvmf_tgt_poll_group_000", 00:18:04.724 "listen_address": { 00:18:04.724 "trtype": "TCP", 00:18:04.724 "adrfam": "IPv4", 00:18:04.724 "traddr": "10.0.0.2", 00:18:04.724 "trsvcid": "4420" 00:18:04.724 }, 00:18:04.724 "peer_address": { 00:18:04.724 "trtype": "TCP", 00:18:04.724 "adrfam": "IPv4", 00:18:04.724 "traddr": "10.0.0.1", 00:18:04.724 "trsvcid": "51622" 00:18:04.724 }, 00:18:04.724 "auth": { 00:18:04.724 "state": "completed", 00:18:04.724 "digest": "sha512", 00:18:04.724 "dhgroup": "null" 00:18:04.724 } 00:18:04.724 } 00:18:04.724 ]' 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.724 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.981 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.981 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.981 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.981 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.981 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.238 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.199 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.199 15:59:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.457 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.728 00:18:06.728 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.728 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.728 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.986 { 00:18:06.986 "cntlid": 101, 00:18:06.986 "qid": 0, 00:18:06.986 "state": "enabled", 00:18:06.986 "thread": "nvmf_tgt_poll_group_000", 00:18:06.986 "listen_address": { 00:18:06.986 "trtype": "TCP", 00:18:06.986 "adrfam": "IPv4", 00:18:06.986 "traddr": "10.0.0.2", 00:18:06.986 "trsvcid": "4420" 00:18:06.986 }, 00:18:06.986 "peer_address": { 00:18:06.986 "trtype": "TCP", 00:18:06.986 "adrfam": "IPv4", 00:18:06.986 "traddr": "10.0.0.1", 00:18:06.986 "trsvcid": "51646" 00:18:06.986 }, 00:18:06.986 "auth": 
{ 00:18:06.986 "state": "completed", 00:18:06.986 "digest": "sha512", 00:18:06.986 "dhgroup": "null" 00:18:06.986 } 00:18:06.986 } 00:18:06.986 ]' 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.986 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.244 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.244 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.244 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.244 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.244 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.501 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.438 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.696 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.953 00:18:08.953 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.953 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.953 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.210 { 00:18:09.210 "cntlid": 103, 00:18:09.210 "qid": 0, 00:18:09.210 "state": "enabled", 00:18:09.210 "thread": "nvmf_tgt_poll_group_000", 00:18:09.210 "listen_address": { 00:18:09.210 "trtype": "TCP", 00:18:09.210 "adrfam": "IPv4", 00:18:09.210 "traddr": "10.0.0.2", 00:18:09.210 "trsvcid": "4420" 00:18:09.210 }, 00:18:09.210 "peer_address": { 00:18:09.210 "trtype": "TCP", 00:18:09.210 "adrfam": "IPv4", 00:18:09.210 "traddr": "10.0.0.1", 00:18:09.210 "trsvcid": "51668" 00:18:09.210 }, 00:18:09.210 "auth": { 00:18:09.210 "state": "completed", 00:18:09.210 "digest": "sha512", 00:18:09.210 "dhgroup": "null" 00:18:09.210 } 00:18:09.210 } 00:18:09.210 ]' 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.210 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.467 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.467 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.467 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.725 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.660 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.917 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.918 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.918 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.918 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.918 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.177 00:18:11.435 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.435 15:59:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.435 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.693 { 00:18:11.693 "cntlid": 105, 00:18:11.693 "qid": 0, 00:18:11.693 "state": "enabled", 00:18:11.693 "thread": "nvmf_tgt_poll_group_000", 00:18:11.693 "listen_address": { 00:18:11.693 "trtype": "TCP", 00:18:11.693 "adrfam": "IPv4", 00:18:11.693 "traddr": "10.0.0.2", 00:18:11.693 "trsvcid": "4420" 00:18:11.693 }, 00:18:11.693 "peer_address": { 00:18:11.693 "trtype": "TCP", 00:18:11.693 "adrfam": "IPv4", 00:18:11.693 "traddr": "10.0.0.1", 00:18:11.693 "trsvcid": "51700" 00:18:11.693 }, 00:18:11.693 "auth": { 00:18:11.693 "state": "completed", 00:18:11.693 "digest": "sha512", 00:18:11.693 "dhgroup": "ffdhe2048" 00:18:11.693 } 00:18:11.693 } 00:18:11.693 ]' 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.693 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.951 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
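[Editor's sketch] The cycle traced above repeats for each key index: restrict the host's DH-HMAC-CHAP parameters, register the host NQN on the target with a key pair, attach a controller over TCP (which runs the handshake), assert the qpair's auth fields, then detach and also exercise the kernel initiator. A minimal bash sketch of one such cycle, assuming the same sockets and NQNs as the surrounding trace; rpc_cmd is the suite's target-side RPC wrapper, and the standalone [[ ]] assertions are illustrative rather than lifted verbatim from auth.sh:

    # One connect_authenticate cycle (sha512 + ffdhe2048, key1), condensed.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: allow exactly one digest/dhgroup combination for the handshake.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side: register the host with a bidirectional (key + ctrlr-key) pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach the SPDK initiator; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The qpair listing must report a completed handshake with the chosen params.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace that follows is exactly this sequence for key1, with the same three jq checks spelled out line by line.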
00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.885 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.143 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.711 00:18:13.711 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.711 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.712 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.970 { 00:18:13.970 "cntlid": 107, 00:18:13.970 "qid": 0, 00:18:13.970 "state": "enabled", 00:18:13.970 "thread": 
"nvmf_tgt_poll_group_000", 00:18:13.970 "listen_address": { 00:18:13.970 "trtype": "TCP", 00:18:13.970 "adrfam": "IPv4", 00:18:13.970 "traddr": "10.0.0.2", 00:18:13.970 "trsvcid": "4420" 00:18:13.970 }, 00:18:13.970 "peer_address": { 00:18:13.970 "trtype": "TCP", 00:18:13.970 "adrfam": "IPv4", 00:18:13.970 "traddr": "10.0.0.1", 00:18:13.970 "trsvcid": "51730" 00:18:13.970 }, 00:18:13.970 "auth": { 00:18:13.970 "state": "completed", 00:18:13.970 "digest": "sha512", 00:18:13.970 "dhgroup": "ffdhe2048" 00:18:13.970 } 00:18:13.970 } 00:18:13.970 ]' 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.970 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.228 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.164 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.422 15:59:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.422 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.679 00:18:15.680 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.680 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.680 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.936 { 00:18:15.936 "cntlid": 109, 00:18:15.936 "qid": 0, 00:18:15.936 "state": "enabled", 00:18:15.936 "thread": "nvmf_tgt_poll_group_000", 00:18:15.936 "listen_address": { 00:18:15.936 "trtype": "TCP", 00:18:15.936 "adrfam": "IPv4", 00:18:15.936 "traddr": "10.0.0.2", 00:18:15.936 "trsvcid": "4420" 00:18:15.936 }, 00:18:15.936 "peer_address": { 00:18:15.936 "trtype": "TCP", 00:18:15.936 "adrfam": "IPv4", 00:18:15.936 "traddr": "10.0.0.1", 00:18:15.936 "trsvcid": "55388" 00:18:15.936 }, 00:18:15.936 "auth": { 00:18:15.936 "state": "completed", 00:18:15.936 "digest": "sha512", 00:18:15.936 "dhgroup": "ffdhe2048" 00:18:15.936 } 00:18:15.936 } 00:18:15.936 ]' 00:18:15.936 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.194 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.458 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.391 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.648 15:59:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.906 00:18:17.906 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.906 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.906 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.163 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.163 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.163 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.163 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.163 { 00:18:18.163 "cntlid": 111, 00:18:18.163 "qid": 0, 00:18:18.163 "state": "enabled", 00:18:18.163 "thread": "nvmf_tgt_poll_group_000", 00:18:18.163 "listen_address": { 00:18:18.163 "trtype": "TCP", 00:18:18.163 "adrfam": "IPv4", 00:18:18.163 "traddr": "10.0.0.2", 00:18:18.163 "trsvcid": "4420" 00:18:18.163 }, 00:18:18.163 "peer_address": { 00:18:18.163 "trtype": "TCP", 00:18:18.163 "adrfam": "IPv4", 00:18:18.163 "traddr": "10.0.0.1", 00:18:18.163 "trsvcid": "55420" 00:18:18.163 }, 00:18:18.163 "auth": { 00:18:18.163 "state": "completed", 00:18:18.163 "digest": "sha512", 00:18:18.163 "dhgroup": "ffdhe2048" 00:18:18.163 } 00:18:18.163 } 00:18:18.163 ]' 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.163 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.422 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.422 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.422 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.680 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.688 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.257 00:18:20.257 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.257 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.257 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.257 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.257 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.257 15:59:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.257 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.516 { 00:18:20.516 "cntlid": 113, 00:18:20.516 "qid": 0, 00:18:20.516 "state": "enabled", 00:18:20.516 "thread": "nvmf_tgt_poll_group_000", 00:18:20.516 "listen_address": { 00:18:20.516 "trtype": "TCP", 00:18:20.516 "adrfam": "IPv4", 00:18:20.516 "traddr": "10.0.0.2", 00:18:20.516 "trsvcid": "4420" 00:18:20.516 }, 00:18:20.516 "peer_address": { 00:18:20.516 "trtype": "TCP", 00:18:20.516 "adrfam": "IPv4", 00:18:20.516 "traddr": "10.0.0.1", 00:18:20.516 "trsvcid": "55442" 00:18:20.516 }, 00:18:20.516 "auth": { 00:18:20.516 "state": "completed", 00:18:20.516 "digest": "sha512", 00:18:20.516 "dhgroup": "ffdhe3072" 00:18:20.516 } 00:18:20.516 } 00:18:20.516 ]' 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.516 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.773 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.711 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.969 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.537 00:18:22.537 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.537 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.537 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.795 { 00:18:22.795 "cntlid": 115, 00:18:22.795 "qid": 0, 00:18:22.795 "state": "enabled", 00:18:22.795 "thread": "nvmf_tgt_poll_group_000", 00:18:22.795 "listen_address": { 00:18:22.795 "trtype": "TCP", 00:18:22.795 "adrfam": "IPv4", 00:18:22.795 "traddr": "10.0.0.2", 00:18:22.795 "trsvcid": "4420" 00:18:22.795 }, 00:18:22.795 "peer_address": { 00:18:22.795 "trtype": "TCP", 00:18:22.795 "adrfam": "IPv4", 00:18:22.795 "traddr": "10.0.0.1", 00:18:22.795 "trsvcid": "55474" 00:18:22.795 }, 00:18:22.795 "auth": { 00:18:22.795 "state": "completed", 00:18:22.795 "digest": "sha512", 00:18:22.795 "dhgroup": "ffdhe3072" 00:18:22.795 } 00:18:22.795 } 
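[Editor's sketch] Besides the SPDK-initiator checks, each cycle also drives the kernel initiator through nvme-cli before removing the host. A sketch of that leg, mirroring the nvme connect/disconnect calls in the trace; the two DHHC-1 strings below are placeholders, not the keys generated for this run:

    host=5b23e107-7094-e311-b1cb-001e67a97d55
    # Connect with the host secret and the controller (bidirectional) secret.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$host" --hostid "$host" \
        --dhchap-secret 'DHHC-1:01:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
    # A successful handshake is confirmed simply by the disconnect reporting
    # one controller, as in the "disconnected 1 controller(s)" lines above.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0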
00:18:22.795 ]' 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.795 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.053 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.988 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:24.246 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:24.246 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.246 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.247 15:59:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.247 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.815 00:18:24.815 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.815 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.815 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.073 { 00:18:25.073 "cntlid": 117, 00:18:25.073 "qid": 0, 00:18:25.073 "state": "enabled", 00:18:25.073 "thread": "nvmf_tgt_poll_group_000", 00:18:25.073 "listen_address": { 00:18:25.073 "trtype": "TCP", 00:18:25.073 "adrfam": "IPv4", 00:18:25.073 "traddr": "10.0.0.2", 00:18:25.073 "trsvcid": "4420" 00:18:25.073 }, 00:18:25.073 "peer_address": { 00:18:25.073 "trtype": "TCP", 00:18:25.073 "adrfam": "IPv4", 00:18:25.073 "traddr": "10.0.0.1", 00:18:25.073 "trsvcid": "38032" 00:18:25.073 }, 00:18:25.073 "auth": { 00:18:25.073 "state": "completed", 00:18:25.073 "digest": "sha512", 00:18:25.073 "dhgroup": "ffdhe3072" 00:18:25.073 } 00:18:25.073 } 00:18:25.073 ]' 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.073 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.331 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.266 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.523 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.091 00:18:27.091 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.091 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.091 15:59:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.349 { 00:18:27.349 "cntlid": 119, 00:18:27.349 "qid": 0, 00:18:27.349 "state": "enabled", 00:18:27.349 "thread": "nvmf_tgt_poll_group_000", 00:18:27.349 "listen_address": { 00:18:27.349 "trtype": "TCP", 00:18:27.349 "adrfam": "IPv4", 00:18:27.349 "traddr": "10.0.0.2", 00:18:27.349 "trsvcid": "4420" 00:18:27.349 }, 00:18:27.349 "peer_address": { 00:18:27.349 "trtype": "TCP", 00:18:27.349 "adrfam": "IPv4", 00:18:27.349 "traddr": "10.0.0.1", 00:18:27.349 "trsvcid": "38054" 00:18:27.349 }, 00:18:27.349 "auth": { 00:18:27.349 "state": "completed", 00:18:27.349 "digest": "sha512", 00:18:27.349 "dhgroup": "ffdhe3072" 00:18:27.349 } 00:18:27.349 } 00:18:27.349 ]' 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.349 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.606 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.541 15:59:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.541 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.800 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.801 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.369 00:18:29.369 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.369 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.369 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.627 { 00:18:29.627 "cntlid": 121, 00:18:29.627 "qid": 0, 00:18:29.627 "state": "enabled", 00:18:29.627 "thread": "nvmf_tgt_poll_group_000", 00:18:29.627 "listen_address": { 00:18:29.627 "trtype": "TCP", 00:18:29.627 "adrfam": "IPv4", 
00:18:29.627 "traddr": "10.0.0.2", 00:18:29.627 "trsvcid": "4420" 00:18:29.627 }, 00:18:29.627 "peer_address": { 00:18:29.627 "trtype": "TCP", 00:18:29.627 "adrfam": "IPv4", 00:18:29.627 "traddr": "10.0.0.1", 00:18:29.627 "trsvcid": "38086" 00:18:29.627 }, 00:18:29.627 "auth": { 00:18:29.627 "state": "completed", 00:18:29.627 "digest": "sha512", 00:18:29.627 "dhgroup": "ffdhe4096" 00:18:29.627 } 00:18:29.627 } 00:18:29.627 ]' 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.627 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.885 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.823 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.081 15:59:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.081 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.647 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.647 { 00:18:31.647 "cntlid": 123, 00:18:31.647 "qid": 0, 00:18:31.647 "state": "enabled", 00:18:31.647 "thread": "nvmf_tgt_poll_group_000", 00:18:31.647 "listen_address": { 00:18:31.647 "trtype": "TCP", 00:18:31.647 "adrfam": "IPv4", 00:18:31.647 "traddr": "10.0.0.2", 00:18:31.647 "trsvcid": "4420" 00:18:31.647 }, 00:18:31.647 "peer_address": { 00:18:31.647 "trtype": "TCP", 00:18:31.647 "adrfam": "IPv4", 00:18:31.647 "traddr": "10.0.0.1", 00:18:31.647 "trsvcid": "38114" 00:18:31.647 }, 00:18:31.647 "auth": { 00:18:31.647 "state": "completed", 00:18:31.647 "digest": "sha512", 00:18:31.647 "dhgroup": "ffdhe4096" 00:18:31.647 } 00:18:31.647 } 00:18:31.647 ]' 00:18:31.647 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.904 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.905 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.905 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.905 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.905 15:59:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.905 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.905 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.163 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:33.098 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.369 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.678 00:18:33.678 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.678 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.678 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.935 { 00:18:33.935 "cntlid": 125, 00:18:33.935 "qid": 0, 00:18:33.935 "state": "enabled", 00:18:33.935 "thread": "nvmf_tgt_poll_group_000", 00:18:33.935 "listen_address": { 00:18:33.935 "trtype": "TCP", 00:18:33.935 "adrfam": "IPv4", 00:18:33.935 "traddr": "10.0.0.2", 00:18:33.935 "trsvcid": "4420" 00:18:33.935 }, 00:18:33.935 "peer_address": { 00:18:33.935 "trtype": "TCP", 00:18:33.935 "adrfam": "IPv4", 00:18:33.935 "traddr": "10.0.0.1", 00:18:33.935 "trsvcid": "38144" 00:18:33.935 }, 00:18:33.935 "auth": { 00:18:33.935 "state": "completed", 00:18:33.935 "digest": "sha512", 00:18:33.935 "dhgroup": "ffdhe4096" 00:18:33.935 } 00:18:33.935 } 00:18:33.935 ]' 00:18:33.935 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.192 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.448 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
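The trace above repeats one pattern per digest/dhgroup/key combination: the host-side bdev_nvme app (RPC socket /var/tmp/host.sock) is restricted to a single DH-HMAC-CHAP digest and DH group, the host NQN is registered on the target with a key pair, a controller is attached, and nvmf_subsystem_get_qpairs is used to confirm the qpair negotiated the expected digest, dhgroup, and a "completed" auth state before detaching. Below is a condensed sketch of one such iteration, assuming key0/ckey0 name keys that auth.sh registered with the target's keyring earlier in the run (not shown in this excerpt); the nvme-cli leg of the same iteration is sketched at the end of this section.

#!/usr/bin/env bash
# Condensed sketch of one iteration of the loop traced above. Assumes the
# SPDK target is on its default RPC socket, the host-side bdev_nvme app is
# on /var/tmp/host.sock (as in the trace), and key0/ckey0 are key names
# already registered with the keyring earlier in auth.sh (not shown here).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: restrict negotiation to a single digest/dhgroup pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: allow this host NQN, with a controller key for
# bidirectional authentication.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach, then confirm the qpair actually completed DH-HMAC-CHAP.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0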
00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.378 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.635 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.202 00:18:36.202 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.202 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.202 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.202 { 00:18:36.202 "cntlid": 127, 00:18:36.202 "qid": 0, 00:18:36.202 "state": "enabled", 00:18:36.202 "thread": "nvmf_tgt_poll_group_000", 00:18:36.202 "listen_address": { 00:18:36.202 "trtype": "TCP", 00:18:36.202 "adrfam": "IPv4", 00:18:36.202 "traddr": "10.0.0.2", 00:18:36.202 "trsvcid": "4420" 00:18:36.202 }, 00:18:36.202 "peer_address": { 00:18:36.202 "trtype": "TCP", 00:18:36.202 "adrfam": "IPv4", 00:18:36.202 "traddr": "10.0.0.1", 00:18:36.202 "trsvcid": "37798" 00:18:36.202 }, 00:18:36.202 "auth": { 00:18:36.202 "state": "completed", 00:18:36.202 "digest": "sha512", 00:18:36.202 "dhgroup": "ffdhe4096" 00:18:36.202 } 00:18:36.202 } 00:18:36.202 ]' 00:18:36.202 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.716 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.650 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.908 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.474 00:18:38.474 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.474 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.474 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.731 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.732 { 00:18:38.732 "cntlid": 129, 00:18:38.732 "qid": 0, 00:18:38.732 "state": "enabled", 00:18:38.732 "thread": "nvmf_tgt_poll_group_000", 00:18:38.732 "listen_address": { 00:18:38.732 "trtype": "TCP", 00:18:38.732 "adrfam": "IPv4", 00:18:38.732 "traddr": "10.0.0.2", 00:18:38.732 "trsvcid": "4420" 00:18:38.732 }, 00:18:38.732 "peer_address": { 00:18:38.732 "trtype": "TCP", 00:18:38.732 "adrfam": "IPv4", 00:18:38.732 "traddr": "10.0.0.1", 00:18:38.732 "trsvcid": "37826" 00:18:38.732 }, 00:18:38.732 "auth": { 00:18:38.732 "state": "completed", 00:18:38.732 "digest": "sha512", 00:18:38.732 "dhgroup": "ffdhe6144" 00:18:38.732 } 00:18:38.732 } 00:18:38.732 ]' 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.732 16:00:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.732 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.989 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.989 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.989 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.247 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.181 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.439 16:00:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.439 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.004 00:18:41.005 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.005 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.005 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.261 { 00:18:41.261 "cntlid": 131, 00:18:41.261 "qid": 0, 00:18:41.261 "state": "enabled", 00:18:41.261 "thread": "nvmf_tgt_poll_group_000", 00:18:41.261 "listen_address": { 00:18:41.261 "trtype": "TCP", 00:18:41.261 "adrfam": "IPv4", 00:18:41.261 "traddr": "10.0.0.2", 00:18:41.261 "trsvcid": "4420" 00:18:41.261 }, 00:18:41.261 "peer_address": { 00:18:41.261 "trtype": "TCP", 00:18:41.261 "adrfam": "IPv4", 00:18:41.261 "traddr": "10.0.0.1", 00:18:41.261 "trsvcid": "37852" 00:18:41.261 }, 00:18:41.261 "auth": { 00:18:41.261 "state": "completed", 00:18:41.261 "digest": "sha512", 00:18:41.261 "dhgroup": "ffdhe6144" 00:18:41.261 } 00:18:41.261 } 00:18:41.261 ]' 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.261 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.518 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:42.448 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.706 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.272 00:18:43.272 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.272 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.272 16:00:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.529 { 00:18:43.529 "cntlid": 133, 00:18:43.529 "qid": 0, 00:18:43.529 "state": "enabled", 00:18:43.529 "thread": "nvmf_tgt_poll_group_000", 00:18:43.529 "listen_address": { 00:18:43.529 "trtype": "TCP", 00:18:43.529 "adrfam": "IPv4", 00:18:43.529 "traddr": "10.0.0.2", 00:18:43.529 "trsvcid": "4420" 00:18:43.529 }, 00:18:43.529 "peer_address": { 00:18:43.529 "trtype": "TCP", 00:18:43.529 "adrfam": "IPv4", 00:18:43.529 "traddr": "10.0.0.1", 00:18:43.529 "trsvcid": "37872" 00:18:43.529 }, 00:18:43.529 "auth": { 00:18:43.529 "state": "completed", 00:18:43.529 "digest": "sha512", 00:18:43.529 "dhgroup": "ffdhe6144" 00:18:43.529 } 00:18:43.529 } 00:18:43.529 ]' 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.529 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.786 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.787 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.787 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.787 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.787 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.045 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.977 16:00:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.977 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.234 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.234 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.797 00:18:45.797 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.798 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.798 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.054 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.054 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.055 { 00:18:46.055 "cntlid": 135, 00:18:46.055 "qid": 0, 00:18:46.055 "state": "enabled", 00:18:46.055 "thread": "nvmf_tgt_poll_group_000", 00:18:46.055 "listen_address": { 00:18:46.055 "trtype": "TCP", 00:18:46.055 "adrfam": "IPv4", 00:18:46.055 "traddr": "10.0.0.2", 00:18:46.055 "trsvcid": "4420" 00:18:46.055 }, 
00:18:46.055 "peer_address": { 00:18:46.055 "trtype": "TCP", 00:18:46.055 "adrfam": "IPv4", 00:18:46.055 "traddr": "10.0.0.1", 00:18:46.055 "trsvcid": "55416" 00:18:46.055 }, 00:18:46.055 "auth": { 00:18:46.055 "state": "completed", 00:18:46.055 "digest": "sha512", 00:18:46.055 "dhgroup": "ffdhe6144" 00:18:46.055 } 00:18:46.055 } 00:18:46.055 ]' 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.055 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.313 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.709 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.643 00:18:48.643 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.643 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.643 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.901 { 00:18:48.901 "cntlid": 137, 00:18:48.901 "qid": 0, 00:18:48.901 "state": "enabled", 00:18:48.901 "thread": "nvmf_tgt_poll_group_000", 00:18:48.901 "listen_address": { 00:18:48.901 "trtype": "TCP", 00:18:48.901 "adrfam": "IPv4", 00:18:48.901 "traddr": "10.0.0.2", 00:18:48.901 "trsvcid": "4420" 00:18:48.901 }, 00:18:48.901 "peer_address": { 00:18:48.901 "trtype": "TCP", 00:18:48.901 "adrfam": "IPv4", 00:18:48.901 "traddr": "10.0.0.1", 00:18:48.901 "trsvcid": "55434" 00:18:48.901 }, 00:18:48.901 "auth": { 00:18:48.901 "state": "completed", 00:18:48.901 "digest": "sha512", 00:18:48.901 "dhgroup": "ffdhe8192" 00:18:48.901 } 00:18:48.901 } 00:18:48.901 ]' 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.901 16:00:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.901 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.158 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:18:50.088 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.088 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.653 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.217 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.473 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.732 { 00:18:51.732 "cntlid": 139, 00:18:51.732 "qid": 0, 00:18:51.732 "state": "enabled", 00:18:51.732 "thread": "nvmf_tgt_poll_group_000", 00:18:51.732 "listen_address": { 00:18:51.732 "trtype": "TCP", 00:18:51.732 "adrfam": "IPv4", 00:18:51.732 "traddr": "10.0.0.2", 00:18:51.732 "trsvcid": "4420" 00:18:51.732 }, 00:18:51.732 "peer_address": { 00:18:51.732 "trtype": "TCP", 00:18:51.732 "adrfam": "IPv4", 00:18:51.732 "traddr": "10.0.0.1", 00:18:51.732 "trsvcid": "55464" 00:18:51.732 }, 00:18:51.732 "auth": { 00:18:51.732 "state": "completed", 00:18:51.732 "digest": "sha512", 00:18:51.732 "dhgroup": "ffdhe8192" 00:18:51.732 } 00:18:51.732 } 00:18:51.732 ]' 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.732 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.989 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDhiZjM1ZWVjMTkyMzNjNmY0YjQzMjM4ZDYwZTNiMziaYfqz: --dhchap-ctrl-secret DHHC-1:02:MWI5YjUyYzBmNzBkMzg1MTU4NTU4YzljY2E5YWY4YjlmYmM1ZTE5Yjc5YzE3MTFmiHJrXQ==: 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.919 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.175 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.175 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.175 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.175 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.111 00:18:54.111 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.111 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.111 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.371 { 00:18:54.371 "cntlid": 141, 00:18:54.371 "qid": 0, 00:18:54.371 "state": "enabled", 00:18:54.371 "thread": "nvmf_tgt_poll_group_000", 00:18:54.371 "listen_address": { 00:18:54.371 "trtype": "TCP", 00:18:54.371 "adrfam": "IPv4", 00:18:54.371 "traddr": "10.0.0.2", 00:18:54.371 "trsvcid": "4420" 00:18:54.371 }, 00:18:54.371 "peer_address": { 00:18:54.371 "trtype": "TCP", 00:18:54.371 "adrfam": "IPv4", 00:18:54.371 "traddr": "10.0.0.1", 00:18:54.371 "trsvcid": "55496" 00:18:54.371 }, 00:18:54.371 "auth": { 00:18:54.371 "state": "completed", 00:18:54.371 "digest": "sha512", 00:18:54.371 "dhgroup": "ffdhe8192" 00:18:54.371 } 00:18:54.371 } 00:18:54.371 ]' 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.371 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.629 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDFlZjEwZDc0OGNhNWVhNzQ5OTZiNGExZGRhMTY1NDNkN2U1N2QwYzRjNjcyMjcwnmcKYw==: --dhchap-ctrl-secret DHHC-1:01:ZjZjNDhhNTMwZDBkMzY3MzRkZWMwZjY0ZjFiOWI1MDJm8xZl: 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.560 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.819 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.749 00:18:56.749 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.749 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.749 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.007 { 00:18:57.007 "cntlid": 143, 00:18:57.007 "qid": 0, 00:18:57.007 "state": "enabled", 00:18:57.007 "thread": "nvmf_tgt_poll_group_000", 00:18:57.007 "listen_address": { 00:18:57.007 "trtype": "TCP", 00:18:57.007 "adrfam": "IPv4", 00:18:57.007 "traddr": "10.0.0.2", 00:18:57.007 "trsvcid": "4420" 00:18:57.007 }, 00:18:57.007 "peer_address": { 00:18:57.007 "trtype": "TCP", 00:18:57.007 "adrfam": "IPv4", 00:18:57.007 "traddr": "10.0.0.1", 00:18:57.007 "trsvcid": "51680" 00:18:57.007 }, 00:18:57.007 "auth": { 00:18:57.007 "state": "completed", 00:18:57.007 "digest": "sha512", 00:18:57.007 "dhgroup": "ffdhe8192" 00:18:57.007 } 00:18:57.007 } 00:18:57.007 ]' 00:18:57.007 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.265 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.265 
16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.265 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.265 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.265 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.265 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.265 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.522 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:58.454 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.712 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.644 00:18:59.644 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.644 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.644 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.902 { 00:18:59.902 "cntlid": 145, 00:18:59.902 "qid": 0, 00:18:59.902 "state": "enabled", 00:18:59.902 "thread": "nvmf_tgt_poll_group_000", 00:18:59.902 "listen_address": { 00:18:59.902 "trtype": "TCP", 00:18:59.902 "adrfam": "IPv4", 00:18:59.902 "traddr": "10.0.0.2", 00:18:59.902 "trsvcid": "4420" 00:18:59.902 }, 00:18:59.902 "peer_address": { 00:18:59.902 "trtype": "TCP", 00:18:59.902 "adrfam": "IPv4", 00:18:59.902 "traddr": "10.0.0.1", 00:18:59.902 "trsvcid": "51712" 00:18:59.902 }, 00:18:59.902 "auth": { 00:18:59.902 "state": "completed", 00:18:59.902 "digest": "sha512", 00:18:59.902 "dhgroup": "ffdhe8192" 00:18:59.902 } 00:18:59.902 } 00:18:59.902 ]' 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.902 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.161 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.419 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWRhZjBmNGFkYjhkZmM4MmI2ZmUwMmQwNGNkNDIzMDUzM2ZhY2M0YmZkNjkxYjg5B3ZrQQ==: --dhchap-ctrl-secret DHHC-1:03:YWRhNTUxN2Y2MjkxYzkxZGRlODdmN2RhZDU3YmJlODA4YzExYWY1MDlkYmY2Njk3MGUzMDg4NGU0YWYyMDYwZcY8tNE=: 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:01.351 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:02.338 request: 00:19:02.338 { 00:19:02.338 "name": "nvme0", 00:19:02.338 "trtype": "tcp", 00:19:02.338 "traddr": "10.0.0.2", 00:19:02.338 "adrfam": "ipv4", 00:19:02.338 "trsvcid": "4420", 00:19:02.338 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.338 "prchk_reftag": false, 00:19:02.338 "prchk_guard": false, 00:19:02.338 "hdgst": false, 00:19:02.338 "ddgst": false, 00:19:02.338 "dhchap_key": "key2", 00:19:02.338 "method": "bdev_nvme_attach_controller", 00:19:02.338 "req_id": 1 00:19:02.338 } 00:19:02.338 Got JSON-RPC error response 00:19:02.338 response: 00:19:02.338 { 00:19:02.338 "code": -5, 00:19:02.338 "message": "Input/output error" 00:19:02.338 } 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.338 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:03.271 request: 00:19:03.271 { 00:19:03.271 "name": "nvme0", 00:19:03.271 "trtype": "tcp", 00:19:03.271 "traddr": "10.0.0.2", 00:19:03.271 "adrfam": "ipv4", 00:19:03.271 "trsvcid": "4420", 00:19:03.271 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:03.271 "prchk_reftag": false, 00:19:03.271 "prchk_guard": false, 00:19:03.271 "hdgst": false, 00:19:03.271 "ddgst": false, 00:19:03.271 "dhchap_key": "key1", 00:19:03.271 "dhchap_ctrlr_key": "ckey2", 00:19:03.271 "method": "bdev_nvme_attach_controller", 00:19:03.271 "req_id": 1 00:19:03.271 } 00:19:03.271 Got JSON-RPC error response 00:19:03.271 response: 00:19:03.271 { 00:19:03.271 "code": -5, 00:19:03.271 "message": "Input/output error" 00:19:03.271 } 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.271 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.837 request: 00:19:03.837 { 00:19:03.837 "name": "nvme0", 00:19:03.837 "trtype": "tcp", 00:19:03.837 "traddr": "10.0.0.2", 00:19:03.837 "adrfam": "ipv4", 00:19:03.837 "trsvcid": "4420", 00:19:03.837 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:03.837 "prchk_reftag": false, 00:19:03.837 "prchk_guard": false, 00:19:03.837 "hdgst": false, 00:19:03.837 "ddgst": false, 00:19:03.837 "dhchap_key": "key1", 00:19:03.837 "dhchap_ctrlr_key": "ckey1", 00:19:03.837 "method": "bdev_nvme_attach_controller", 00:19:03.837 "req_id": 1 00:19:03.837 } 00:19:03.837 Got JSON-RPC error response 00:19:03.837 response: 00:19:03.837 { 00:19:03.837 "code": -5, 00:19:03.837 "message": "Input/output error" 00:19:03.837 } 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1147672 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1147672 ']' 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1147672 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.837 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147672 00:19:04.094 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:04.094 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:04.094 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147672' 00:19:04.094 killing process with pid 1147672 00:19:04.094 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1147672 00:19:04.094 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1147672 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1170508 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1170508 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1170508 ']' 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.352 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1170508 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1170508 ']' 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
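The restart sequence above follows a standard SPDK autotest pattern: relaunch nvmf_tgt inside the test network namespace with --wait-for-rpc (deferring subsystem init until the RPC server is reachable) and nvmf_auth debug logging, then block until the UNIX-domain RPC socket answers. A minimal sketch of that pattern, using the paths and flags visible in the trace; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten helper:

    # Start the target in the test namespace with nvmf_auth debug logging;
    # --wait-for-rpc defers framework init until an explicit RPC call.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until it accepts requests.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Complete the initialization that --wait-for-rpc deferred.
    ./scripts/rpc.py framework_start_init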
00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.609 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.867 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.867 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:04.867 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:04.867 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.867 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.124 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.054 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.054 { 00:19:06.054 
"cntlid": 1, 00:19:06.054 "qid": 0, 00:19:06.054 "state": "enabled", 00:19:06.054 "thread": "nvmf_tgt_poll_group_000", 00:19:06.054 "listen_address": { 00:19:06.054 "trtype": "TCP", 00:19:06.054 "adrfam": "IPv4", 00:19:06.054 "traddr": "10.0.0.2", 00:19:06.054 "trsvcid": "4420" 00:19:06.054 }, 00:19:06.054 "peer_address": { 00:19:06.054 "trtype": "TCP", 00:19:06.054 "adrfam": "IPv4", 00:19:06.054 "traddr": "10.0.0.1", 00:19:06.054 "trsvcid": "55236" 00:19:06.054 }, 00:19:06.054 "auth": { 00:19:06.054 "state": "completed", 00:19:06.054 "digest": "sha512", 00:19:06.054 "dhgroup": "ffdhe8192" 00:19:06.054 } 00:19:06.054 } 00:19:06.054 ]' 00:19:06.054 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.312 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.569 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzYyZDBkOWUwYjA2MjYxZGE4Y2UxYzRkNWJjMmRmZWU1YTk0ZTJjMTkyMDM4ODUzYTVmMzQ5ODAzNzU2OGM0Y94sbIo=: 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:07.502 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.760 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.018 request: 00:19:08.018 { 00:19:08.018 "name": "nvme0", 00:19:08.018 "trtype": "tcp", 00:19:08.018 "traddr": "10.0.0.2", 00:19:08.018 "adrfam": "ipv4", 00:19:08.018 "trsvcid": "4420", 00:19:08.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:08.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.018 "prchk_reftag": false, 00:19:08.018 "prchk_guard": false, 00:19:08.018 "hdgst": false, 00:19:08.018 "ddgst": false, 00:19:08.018 "dhchap_key": "key3", 00:19:08.018 "method": "bdev_nvme_attach_controller", 00:19:08.018 "req_id": 1 00:19:08.018 } 00:19:08.018 Got JSON-RPC error response 00:19:08.018 response: 00:19:08.018 { 00:19:08.018 "code": -5, 00:19:08.018 "message": "Input/output error" 00:19:08.018 } 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:08.018 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.275 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.531 request: 00:19:08.531 { 00:19:08.531 "name": "nvme0", 00:19:08.531 "trtype": "tcp", 00:19:08.531 "traddr": "10.0.0.2", 00:19:08.532 "adrfam": "ipv4", 00:19:08.532 "trsvcid": "4420", 00:19:08.532 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:08.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.532 "prchk_reftag": false, 00:19:08.532 "prchk_guard": false, 00:19:08.532 "hdgst": false, 00:19:08.532 "ddgst": false, 00:19:08.532 "dhchap_key": "key3", 00:19:08.532 "method": "bdev_nvme_attach_controller", 00:19:08.532 "req_id": 1 00:19:08.532 } 00:19:08.532 Got JSON-RPC error response 00:19:08.532 response: 00:19:08.532 { 00:19:08.532 "code": -5, 00:19:08.532 "message": "Input/output error" 00:19:08.532 } 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.532 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:08.789 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:09.047 request: 00:19:09.047 { 00:19:09.047 "name": "nvme0", 00:19:09.047 "trtype": "tcp", 00:19:09.047 "traddr": "10.0.0.2", 00:19:09.047 "adrfam": "ipv4", 00:19:09.047 "trsvcid": "4420", 00:19:09.047 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.047 "prchk_reftag": false, 00:19:09.047 "prchk_guard": false, 00:19:09.047 "hdgst": false, 00:19:09.047 "ddgst": false, 00:19:09.047 
"dhchap_key": "key0", 00:19:09.047 "dhchap_ctrlr_key": "key1", 00:19:09.047 "method": "bdev_nvme_attach_controller", 00:19:09.047 "req_id": 1 00:19:09.047 } 00:19:09.047 Got JSON-RPC error response 00:19:09.047 response: 00:19:09.047 { 00:19:09.047 "code": -5, 00:19:09.047 "message": "Input/output error" 00:19:09.047 } 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:09.047 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:09.612 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.612 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1147823 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1147823 ']' 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1147823 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147823 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147823' 00:19:09.870 killing process with pid 1147823 00:19:09.870 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1147823 00:19:09.871 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1147823 
00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:10.434 rmmod nvme_tcp 00:19:10.434 rmmod nvme_fabrics 00:19:10.434 rmmod nvme_keyring 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1170508 ']' 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1170508 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1170508 ']' 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1170508 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170508 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170508' 00:19:10.434 killing process with pid 1170508 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1170508 00:19:10.434 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1170508 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.692 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.218 16:00:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:13.218 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.z2e /tmp/spdk.key-sha256.SIv /tmp/spdk.key-sha384.X9A /tmp/spdk.key-sha512.jD6 /tmp/spdk.key-sha512.qRf /tmp/spdk.key-sha384.KmP /tmp/spdk.key-sha256.2ur '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:13.218 00:19:13.218 real 3m11.368s 00:19:13.218 user 7m24.879s 00:19:13.218 sys 0m25.447s 00:19:13.218 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.218 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.218 ************************************ 00:19:13.218 END TEST nvmf_auth_target 00:19:13.218 ************************************ 00:19:13.218 16:00:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:13.218 16:00:39 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:13.218 16:00:39 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.218 16:00:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:13.218 16:00:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.218 16:00:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:13.218 ************************************ 00:19:13.218 START TEST nvmf_bdevio_no_huge 00:19:13.218 ************************************ 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.218 * Looking for test storage... 00:19:13.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:13.218 16:00:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:15.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:15.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:15.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:15.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.116 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:15.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:15.117 00:19:15.117 --- 10.0.0.2 ping statistics --- 00:19:15.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.117 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:19:15.117 00:19:15.117 --- 10.0.0.1 ping statistics --- 00:19:15.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.117 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1173274 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1173274 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1173274 ']' 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.117 16:00:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:15.117 [2024-07-15 16:00:41.855219] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:15.117 [2024-07-15 16:00:41.855307] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:15.117 [2024-07-15 16:00:41.936758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.375 [2024-07-15 16:00:42.060640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
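nvmftestinit detected the two ice ports (0x8086:0x159b) as cvl_0_0 and cvl_0_1 and built a two-namespace loopback out of them: cvl_0_0 becomes the target side at 10.0.0.2 inside namespace cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the target is then launched inside that namespace. Condensed from the records above (interface names as detected; the nvmf_tgt flags are verbatim):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# No-hugepage variant: --no-huge with -s 1024 gives the target a 1024 MiB
# heap instead of hugepage-backed memory; core mask 0x78 = cores 3-6,
# matching the reactor lines that follow:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78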
00:19:15.375 [2024-07-15 16:00:42.060696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.375 [2024-07-15 16:00:42.060713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.375 [2024-07-15 16:00:42.060726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.375 [2024-07-15 16:00:42.060737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.375 [2024-07-15 16:00:42.060854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.375 [2024-07-15 16:00:42.060937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:15.375 [2024-07-15 16:00:42.060989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:15.375 [2024-07-15 16:00:42.060992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.939 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:15.939 [2024-07-15 16:00:42.868103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.197 Malloc0 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.197 16:00:42 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.197 [2024-07-15 16:00:42.906748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.197 { 00:19:16.197 "params": { 00:19:16.197 "name": "Nvme$subsystem", 00:19:16.197 "trtype": "$TEST_TRANSPORT", 00:19:16.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.197 "adrfam": "ipv4", 00:19:16.197 "trsvcid": "$NVMF_PORT", 00:19:16.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.197 "hdgst": ${hdgst:-false}, 00:19:16.197 "ddgst": ${ddgst:-false} 00:19:16.197 }, 00:19:16.197 "method": "bdev_nvme_attach_controller" 00:19:16.197 } 00:19:16.197 EOF 00:19:16.197 )") 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:16.197 16:00:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:16.197 "params": { 00:19:16.197 "name": "Nvme1", 00:19:16.197 "trtype": "tcp", 00:19:16.197 "traddr": "10.0.0.2", 00:19:16.197 "adrfam": "ipv4", 00:19:16.197 "trsvcid": "4420", 00:19:16.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.197 "hdgst": false, 00:19:16.197 "ddgst": false 00:19:16.197 }, 00:19:16.197 "method": "bdev_nvme_attach_controller" 00:19:16.197 }' 00:19:16.197 [2024-07-15 16:00:42.954554] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
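The bdevio app receives its bdev configuration over /dev/fd/62, generated by gen_nvmf_target_json from the heredoc template above; the printf record shows the expanded attach-controller fragment. Written out by hand it would look like the invocation below. The params block is copied verbatim from the log; the outer "subsystems"/"bdev" wrapper is produced by gen_nvmf_target_json in test/nvmf/common.sh, so its exact shape here is an assumption:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /dev/stdin --no-huge -s 1024 <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF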
00:19:16.197 [2024-07-15 16:00:42.954640] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1173426 ] 00:19:16.197 [2024-07-15 16:00:43.019539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.468 [2024-07-15 16:00:43.131655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.468 [2024-07-15 16:00:43.131702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.468 [2024-07-15 16:00:43.131705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.468 I/O targets: 00:19:16.468 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:16.468 00:19:16.468 00:19:16.468 CUnit - A unit testing framework for C - Version 2.1-3 00:19:16.468 http://cunit.sourceforge.net/ 00:19:16.468 00:19:16.468 00:19:16.468 Suite: bdevio tests on: Nvme1n1 00:19:16.468 Test: blockdev write read block ...passed 00:19:16.468 Test: blockdev write zeroes read block ...passed 00:19:16.468 Test: blockdev write zeroes read no split ...passed 00:19:16.749 Test: blockdev write zeroes read split ...passed 00:19:16.749 Test: blockdev write zeroes read split partial ...passed 00:19:16.749 Test: blockdev reset ...[2024-07-15 16:00:43.508368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.749 [2024-07-15 16:00:43.508477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1fb0 (9): Bad file descriptor 00:19:16.749 [2024-07-15 16:00:43.560707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:16.749 passed 00:19:16.749 Test: blockdev write read 8 blocks ...passed 00:19:16.749 Test: blockdev write read size > 128k ...passed 00:19:16.749 Test: blockdev write read invalid size ...passed 00:19:16.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:16.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:16.749 Test: blockdev write read max offset ...passed 00:19:17.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:17.020 Test: blockdev writev readv 8 blocks ...passed 00:19:17.020 Test: blockdev writev readv 30 x 1block ...passed 00:19:17.020 Test: blockdev writev readv block ...passed 00:19:17.020 Test: blockdev writev readv size > 128k ...passed 00:19:17.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:17.020 Test: blockdev comparev and writev ...[2024-07-15 16:00:43.817425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.817461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.817502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.817908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.817933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.817955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.817971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.818346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.818369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.818389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.818405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.818827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.818848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:17.020 [2024-07-15 16:00:43.818863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:17.020 passed 00:19:17.020 Test: blockdev nvme passthru rw ...passed 00:19:17.020 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:00:43.901258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:17.020 [2024-07-15 16:00:43.901285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.901465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:17.020 [2024-07-15 16:00:43.901488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.901666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:17.020 [2024-07-15 16:00:43.901689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:17.020 [2024-07-15 16:00:43.901885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:17.020 [2024-07-15 16:00:43.901909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:17.020 passed 00:19:17.020 Test: blockdev nvme admin passthru ...passed 00:19:17.278 Test: blockdev copy ...passed 00:19:17.278 00:19:17.278 Run Summary: Type Total Ran Passed Failed Inactive 00:19:17.278 suites 1 1 n/a 0 0 00:19:17.278 tests 23 23 23 0 0 00:19:17.278 asserts 152 152 152 0 n/a 00:19:17.278 00:19:17.278 Elapsed time = 1.338 seconds 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.535 rmmod nvme_tcp 00:19:17.535 rmmod nvme_fabrics 00:19:17.535 rmmod nvme_keyring 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1173274 ']' 00:19:17.535 16:00:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1173274 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1173274 ']' 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1173274 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173274 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173274' 00:19:17.535 killing process with pid 1173274 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1173274 00:19:17.535 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1173274 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.099 16:00:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.000 16:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:20.000 00:19:20.000 real 0m7.197s 00:19:20.000 user 0m13.908s 00:19:20.000 sys 0m2.463s 00:19:20.000 16:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.000 16:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.000 ************************************ 00:19:20.000 END TEST nvmf_bdevio_no_huge 00:19:20.000 ************************************ 00:19:20.000 16:00:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:20.000 16:00:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:20.000 16:00:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.000 16:00:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.000 16:00:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.259 ************************************ 00:19:20.259 START TEST nvmf_tls 00:19:20.259 ************************************ 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:20.259 * Looking for test storage... 
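Each suite in this run is driven through run_test, which supplies the timing lines and the START/END banners seen above and propagates the script's exit status. The two suites bracketing this point can be reproduced standalone from a built SPDK tree with the same two-port TCP setup in place (a sketch; run_test itself adds only the bookkeeping):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages   # 23/23 tests passed above
test/nvmf/target/tls.sh --transport=tcp                     # the suite that starts next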
00:19:20.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.259 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.260 16:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.260 16:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.159 
16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.159 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:19:22.160 00:19:22.160 --- 10.0.0.2 ping statistics --- 00:19:22.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.160 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:19:22.160 00:19:22.160 --- 10.0.0.1 ping statistics --- 00:19:22.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.160 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1175496 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1175496 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1175496 ']' 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.160 16:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.160 [2024-07-15 16:00:49.046114] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:22.160 [2024-07-15 16:00:49.046206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.160 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.417 [2024-07-15 16:00:49.116819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.417 [2024-07-15 16:00:49.231577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.417 [2024-07-15 16:00:49.231642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
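[Annotation] Up to this point the trace is environment plumbing: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private network namespace to act as the target at 10.0.0.2, leaves its sibling port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, and verifies reachability both ways. Condensed into a standalone sketch (interface names and addresses exactly as traced above; this is a reconstruction, not the literal nvmf/common.sh code, and it presumes the two ports are cabled so traffic leaving cvl_0_1 arrives on cvl_0_0):

    # Condensed reconstruction of the nvmf_tcp_init steps traced above.
    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace

This is why every target-side command later in the trace, including the nvmf_tgt launch just below, is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD).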
00:19:22.417 [2024-07-15 16:00:49.231658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.417 [2024-07-15 16:00:49.231671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.417 [2024-07-15 16:00:49.231682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.417 [2024-07-15 16:00:49.231723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:23.350 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:23.350 true 00:19:23.607 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.607 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:23.864 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:23.865 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:23.865 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:24.122 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.122 16:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:24.381 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:24.381 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:24.381 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:24.639 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.639 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:24.896 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:24.896 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:24.896 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.896 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:25.153 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:25.153 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:25.153 16:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:25.411 16:00:52 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.411 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:25.669 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:25.669 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:25.669 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:25.928 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.928 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:26.185 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.QvugcHMDQB 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.oHFpGxsIsF 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.QvugcHMDQB 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.oHFpGxsIsF 00:19:26.185 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:26.441 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:27.005 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.QvugcHMDQB 00:19:27.005 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QvugcHMDQB 00:19:27.005 16:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.262 [2024-07-15 16:00:54.005833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.262 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.519 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.777 [2024-07-15 16:00:54.515206] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.777 [2024-07-15 16:00:54.515432] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.777 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.035 malloc0 00:19:28.035 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.292 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QvugcHMDQB 00:19:28.550 [2024-07-15 16:00:55.244372] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:28.550 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QvugcHMDQB 00:19:28.550 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.560 Initializing NVMe Controllers 00:19:38.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:38.560 Initialization complete. Launching workers. 
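[Annotation] With networking up, the target is configured over rpc.py: the ssl socket implementation is made the default and pinned to TLS 1.3, framework init completes, and setup_nvmf_tgt (target/tls.sh@49) builds a one-namespace subsystem whose listener requires TLS (-k) and whose host entry carries the PSK. Stripped of the full workspace paths, the traced sequence reduces to the sketch below (rpc.py here stands for scripts/rpc.py talking to the nvmf_tgt inside the namespace):

    # Condensed from the traced setup_nvmf_tgt and its callers above.
    KEY=/tmp/tmp.QvugcHMDQB      # chmod 0600; holds an NVMeTLSkey-1:01:... string
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The spdk_nvme_perf results immediately below come from a connect with -S ssl and --psk-path pointing at the same key file, which is why this positive case completes while the later mismatched cases fail.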
00:19:38.560 ======================================================== 00:19:38.560 Latency(us) 00:19:38.560 Device Information : IOPS MiB/s Average min max 00:19:38.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7791.13 30.43 8217.20 1211.53 9159.19 00:19:38.560 ======================================================== 00:19:38.560 Total : 7791.13 30.43 8217.20 1211.53 9159.19 00:19:38.560 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvugcHMDQB 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QvugcHMDQB' 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1177438 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1177438 /var/tmp/bdevperf.sock 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1177438 ']' 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.560 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.560 [2024-07-15 16:01:05.416273] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:38.560 [2024-07-15 16:01:05.416349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177438 ] 00:19:38.560 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.560 [2024-07-15 16:01:05.474325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.819 [2024-07-15 16:01:05.580528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.819 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.819 16:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.819 16:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QvugcHMDQB 00:19:39.077 [2024-07-15 16:01:05.921378] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.077 [2024-07-15 16:01:05.921492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:39.077 TLSTESTn1 00:19:39.335 16:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:39.335 Running I/O for 10 seconds... 00:19:49.296 00:19:49.296 Latency(us) 00:19:49.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.296 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.296 Verification LBA range: start 0x0 length 0x2000 00:19:49.297 TLSTESTn1 : 10.04 1693.48 6.62 0.00 0.00 75434.64 11553.75 85827.89 00:19:49.297 =================================================================================================================== 00:19:49.297 Total : 1693.48 6.62 0.00 0.00 75434.64 11553.75 85827.89 00:19:49.297 0 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1177438 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1177438 ']' 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1177438 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1177438 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1177438' 00:19:49.297 killing process with pid 1177438 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1177438 00:19:49.297 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.297 00:19:49.297 Latency(us) 00:19:49.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:49.297 =================================================================================================================== 00:19:49.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.297 [2024-07-15 16:01:16.210095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:49.297 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1177438 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oHFpGxsIsF 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oHFpGxsIsF 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oHFpGxsIsF 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oHFpGxsIsF' 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178717 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178717 /var/tmp/bdevperf.sock 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1178717 ']' 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.555 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.813 [2024-07-15 16:01:16.525334] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:49.813 [2024-07-15 16:01:16.525414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178717 ] 00:19:49.813 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.813 [2024-07-15 16:01:16.584457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.813 [2024-07-15 16:01:16.686366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.071 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.071 16:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.072 16:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oHFpGxsIsF 00:19:50.329 [2024-07-15 16:01:17.063451] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.329 [2024-07-15 16:01:17.063567] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.329 [2024-07-15 16:01:17.069069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:50.329 [2024-07-15 16:01:17.069550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ef90 (107): Transport endpoint is not connected 00:19:50.329 [2024-07-15 16:01:17.070538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ef90 (9): Bad file descriptor 00:19:50.329 [2024-07-15 16:01:17.071537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:50.329 [2024-07-15 16:01:17.071558] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.329 [2024-07-15 16:01:17.071590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
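[Annotation] This first negative case presents the wrong key: host1 was registered with /tmp/tmp.QvugcHMDQB, but the attach uses /tmp/tmp.oHFpGxsIsF, so the handshake cannot derive a matching PSK and the RPC dumped below fails with code -5. Both key files were produced earlier by format_interchange_psk (target/tls.sh@118-119), which pipes the raw hex through an inline Python snippet. A hypothetical standalone sketch of that encoding, assuming a TP 8006-style base64(key bytes + little-endian CRC-32) framing, which the observed NVMeTLSkey-1:01:... strings are consistent with:

    # Hypothetical reconstruction of format_interchange_psk, not the
    # common.sh original; the CRC-32 trailer layout is an assumption.
    format_interchange_psk() {
        local key=$1 digest=$2   # digest: 1 = SHA-256, 2 = SHA-384
        python3 - "$key" "$digest" <<'PYEOF'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = struct.pack("<I", zlib.crc32(key))   # assumed little-endian CRC-32 trailer
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
    PYEOF
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # per the trace: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The later format_interchange_psk ... 2 call near the end of this section produces the :02: (SHA-384, 48-byte key) variant stored in /tmp/tmp.m1FmyOkO2m.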
00:19:50.329 request: 00:19:50.329 { 00:19:50.329 "name": "TLSTEST", 00:19:50.329 "trtype": "tcp", 00:19:50.329 "traddr": "10.0.0.2", 00:19:50.329 "adrfam": "ipv4", 00:19:50.329 "trsvcid": "4420", 00:19:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.329 "prchk_reftag": false, 00:19:50.329 "prchk_guard": false, 00:19:50.329 "hdgst": false, 00:19:50.329 "ddgst": false, 00:19:50.329 "psk": "/tmp/tmp.oHFpGxsIsF", 00:19:50.329 "method": "bdev_nvme_attach_controller", 00:19:50.329 "req_id": 1 00:19:50.329 } 00:19:50.329 Got JSON-RPC error response 00:19:50.329 response: 00:19:50.329 { 00:19:50.329 "code": -5, 00:19:50.329 "message": "Input/output error" 00:19:50.329 } 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178717 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1178717 ']' 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1178717 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178717 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178717' 00:19:50.329 killing process with pid 1178717 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1178717 00:19:50.329 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.329 00:19:50.329 Latency(us) 00:19:50.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.329 =================================================================================================================== 00:19:50.329 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.329 [2024-07-15 16:01:17.123544] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:50.329 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1178717 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvugcHMDQB 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvugcHMDQB 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvugcHMDQB 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QvugcHMDQB' 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178858 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178858 /var/tmp/bdevperf.sock 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1178858 ']' 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.587 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.587 [2024-07-15 16:01:17.430588] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:50.587 [2024-07-15 16:01:17.430667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178858 ] 00:19:50.587 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.587 [2024-07-15 16:01:17.488073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.845 [2024-07-15 16:01:17.596474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.845 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.845 16:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.845 16:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.QvugcHMDQB 00:19:51.103 [2024-07-15 16:01:17.982495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.103 [2024-07-15 16:01:17.982633] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:51.103 [2024-07-15 16:01:17.990142] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:51.103 [2024-07-15 16:01:17.990176] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:51.103 [2024-07-15 16:01:17.990229] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.103 [2024-07-15 16:01:17.990522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbcf90 (107): Transport endpoint is not connected 00:19:51.103 [2024-07-15 16:01:17.991510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbcf90 (9): Bad file descriptor 00:19:51.103 [2024-07-15 16:01:17.992510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.103 [2024-07-15 16:01:17.992529] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.103 [2024-07-15 16:01:17.992561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
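[Annotation] The failure just above is the second deliberate one: the key is correct but the host NQN (host2) was never registered, so tcp_sock_get_key finds no PSK for the identity. Each of these cases is wrapped in NOT (target/tls.sh@146, @149, and so on), which inverts the exit status so an expected error counts as a pass; the es=1 and (( !es == 0 )) lines after each run are its bookkeeping. A minimal sketch of that pattern (the real autotest_common.sh helper, per the trace, also distinguishes signal exits via (( es > 128 )) and an optional allowed-output pattern):

    NOT() {
        # Run the command; succeed only if it failed.  Hypothetical
        # reduction of the autotest_common.sh helper traced above.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT false && echo "failure was expected"   # prints: failure was expected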
00:19:51.103 request: 00:19:51.103 { 00:19:51.103 "name": "TLSTEST", 00:19:51.103 "trtype": "tcp", 00:19:51.103 "traddr": "10.0.0.2", 00:19:51.103 "adrfam": "ipv4", 00:19:51.103 "trsvcid": "4420", 00:19:51.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:51.103 "prchk_reftag": false, 00:19:51.103 "prchk_guard": false, 00:19:51.103 "hdgst": false, 00:19:51.103 "ddgst": false, 00:19:51.103 "psk": "/tmp/tmp.QvugcHMDQB", 00:19:51.103 "method": "bdev_nvme_attach_controller", 00:19:51.103 "req_id": 1 00:19:51.103 } 00:19:51.103 Got JSON-RPC error response 00:19:51.103 response: 00:19:51.103 { 00:19:51.103 "code": -5, 00:19:51.103 "message": "Input/output error" 00:19:51.103 } 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178858 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1178858 ']' 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1178858 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.103 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178858 00:19:51.360 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:51.360 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:51.360 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178858' 00:19:51.360 killing process with pid 1178858 00:19:51.360 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1178858 00:19:51.360 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.360 00:19:51.360 Latency(us) 00:19:51.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.360 =================================================================================================================== 00:19:51.360 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.360 [2024-07-15 16:01:18.041277] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:51.360 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1178858 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvugcHMDQB 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvugcHMDQB 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvugcHMDQB 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QvugcHMDQB' 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178993 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178993 /var/tmp/bdevperf.sock 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1178993 ']' 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.619 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.619 [2024-07-15 16:01:18.345142] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:51.619 [2024-07-15 16:01:18.345259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178993 ] 00:19:51.619 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.619 [2024-07-15 16:01:18.403499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.619 [2024-07-15 16:01:18.506629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.877 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.877 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.877 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QvugcHMDQB 00:19:52.135 [2024-07-15 16:01:18.892405] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.135 [2024-07-15 16:01:18.892532] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:52.135 [2024-07-15 16:01:18.902490] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:52.135 [2024-07-15 16:01:18.902520] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:52.135 [2024-07-15 16:01:18.902573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.135 [2024-07-15 16:01:18.903433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64df90 (107): Transport endpoint is not connected 00:19:52.136 [2024-07-15 16:01:18.904424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64df90 (9): Bad file descriptor 00:19:52.136 [2024-07-15 16:01:18.905424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.136 [2024-07-15 16:01:18.905442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.136 [2024-07-15 16:01:18.905473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
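[Annotation] The third case flips the mismatch to the target side: subsystem cnode2 was never created, so the listener again has no PSK for the presented identity and the attach fails identically. On the initiator side, every case reuses the same run_bdevperf scaffolding (target/tls.sh@22), condensed below from the traced commands (paths relative to the SPDK tree; the real helper also waitforlisten's on the RPC socket before attaching and traps cleanup):

    # Condensed from run_bdevperf as traced above.
    SOCK=/var/tmp/bdevperf.sock
    build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

    scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.QvugcHMDQB

    examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests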
00:19:52.136 request: 00:19:52.136 { 00:19:52.136 "name": "TLSTEST", 00:19:52.136 "trtype": "tcp", 00:19:52.136 "traddr": "10.0.0.2", 00:19:52.136 "adrfam": "ipv4", 00:19:52.136 "trsvcid": "4420", 00:19:52.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:52.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.136 "prchk_reftag": false, 00:19:52.136 "prchk_guard": false, 00:19:52.136 "hdgst": false, 00:19:52.136 "ddgst": false, 00:19:52.136 "psk": "/tmp/tmp.QvugcHMDQB", 00:19:52.136 "method": "bdev_nvme_attach_controller", 00:19:52.136 "req_id": 1 00:19:52.136 } 00:19:52.136 Got JSON-RPC error response 00:19:52.136 response: 00:19:52.136 { 00:19:52.136 "code": -5, 00:19:52.136 "message": "Input/output error" 00:19:52.136 } 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178993 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1178993 ']' 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1178993 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178993 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178993' 00:19:52.136 killing process with pid 1178993 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1178993 00:19:52.136 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.136 00:19:52.136 Latency(us) 00:19:52.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.136 =================================================================================================================== 00:19:52.136 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.136 [2024-07-15 16:01:18.946885] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:52.136 16:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1178993 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.394 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1179129 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1179129 /var/tmp/bdevperf.sock 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1179129 ']' 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.395 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.395 [2024-07-15 16:01:19.224019] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:52.395 [2024-07-15 16:01:19.224107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179129 ] 00:19:52.395 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.395 [2024-07-15 16:01:19.282331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.653 [2024-07-15 16:01:19.391915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.653 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.653 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:52.653 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:52.911 [2024-07-15 16:01:19.724740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.911 [2024-07-15 16:01:19.726332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa2770 (9): Bad file descriptor 00:19:52.911 [2024-07-15 16:01:19.727329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.911 [2024-07-15 16:01:19.727348] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.911 [2024-07-15 16:01:19.727380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
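[Annotation] The final negative case omits --psk entirely; the listener was created with -k, so a plain-TCP attach is refused the same way. The request/response dump that follows is the raw JSON-RPC exchange over the bdevperf socket; a minimal hypothetical client showing the same framing (method, params, and an id, mirroring the dumped req_id; a single recv suffices for this small reply):

    python3 - <<'PYEOF'
    # Hypothetical minimal JSON-RPC client for an SPDK app socket; the
    # payload mirrors the request/response dump in the trace below.
    import json, socket

    request = {
        "jsonrpc": "2.0", "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
        },
    }
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())   # expect code -5, "Input/output error"
    PYEOF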
00:19:52.911 request: 00:19:52.911 { 00:19:52.911 "name": "TLSTEST", 00:19:52.911 "trtype": "tcp", 00:19:52.911 "traddr": "10.0.0.2", 00:19:52.911 "adrfam": "ipv4", 00:19:52.911 "trsvcid": "4420", 00:19:52.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.911 "prchk_reftag": false, 00:19:52.911 "prchk_guard": false, 00:19:52.911 "hdgst": false, 00:19:52.911 "ddgst": false, 00:19:52.911 "method": "bdev_nvme_attach_controller", 00:19:52.911 "req_id": 1 00:19:52.911 } 00:19:52.911 Got JSON-RPC error response 00:19:52.911 response: 00:19:52.911 { 00:19:52.911 "code": -5, 00:19:52.911 "message": "Input/output error" 00:19:52.911 } 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1179129 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1179129 ']' 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1179129 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179129 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1179129' 00:19:52.911 killing process with pid 1179129 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1179129 00:19:52.911 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.911 00:19:52.911 Latency(us) 00:19:52.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.911 =================================================================================================================== 00:19:52.911 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.911 16:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1179129 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1175496 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1175496 ']' 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1175496 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175496 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175496' 00:19:53.169 
killing process with pid 1175496 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1175496 00:19:53.169 [2024-07-15 16:01:20.069336] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:53.169 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1175496 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.m1FmyOkO2m 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.m1FmyOkO2m 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1179278 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1179278 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1179278 ']' 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.736 16:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.736 [2024-07-15 16:01:20.472247] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:19:53.736 [2024-07-15 16:01:20.472331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.736 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.736 [2024-07-15 16:01:20.541016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.736 [2024-07-15 16:01:20.658405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.736 [2024-07-15 16:01:20.658464] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.736 [2024-07-15 16:01:20.658480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.736 [2024-07-15 16:01:20.658493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.736 [2024-07-15 16:01:20.658505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.736 [2024-07-15 16:01:20.658543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m1FmyOkO2m 00:19:54.670 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.927 [2024-07-15 16:01:21.658277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.927 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.183 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.440 [2024-07-15 16:01:22.143583] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.440 [2024-07-15 16:01:22.143818] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.440 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.697 malloc0 00:19:55.697 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.954 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.m1FmyOkO2m 00:19:56.211 [2024-07-15 16:01:22.953460] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1FmyOkO2m 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m1FmyOkO2m' 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1179575 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1179575 /var/tmp/bdevperf.sock 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1179575 ']' 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.211 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.211 [2024-07-15 16:01:23.018720] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
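Stripped of the harness plumbing, the bdevperf leg that runs next is just two commands against the private RPC socket opened above (paths exactly as in this log, Jenkins workspace prefix dropped):

```bash
rpc=/var/tmp/bdevperf.sock
# Attach the TLS-secured controller; --psk points at the 0600-mode key file.
scripts/rpc.py -s "$rpc" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.m1FmyOkO2m
# Drive the timed verify workload that produces the TLSTESTn1 table below.
examples/bdev/bdevperf/bdevperf.py -t 20 -s "$rpc" perform_tests
```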
00:19:56.211 [2024-07-15 16:01:23.018793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179575 ] 00:19:56.211 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.211 [2024-07-15 16:01:23.075141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.468 [2024-07-15 16:01:23.182376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.468 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.468 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.468 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m 00:19:56.723 [2024-07-15 16:01:23.563315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.723 [2024-07-15 16:01:23.563445] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.723 TLSTESTn1 00:19:56.723 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.980 Running I/O for 10 seconds... 00:20:07.015 00:20:07.015 Latency(us) 00:20:07.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.015 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:07.015 Verification LBA range: start 0x0 length 0x2000 00:20:07.015 TLSTESTn1 : 10.04 2512.60 9.81 0.00 0.00 50817.07 6844.87 85827.89 00:20:07.015 =================================================================================================================== 00:20:07.015 Total : 2512.60 9.81 0.00 0.00 50817.07 6844.87 85827.89 00:20:07.015 0 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1179575 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1179575 ']' 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1179575 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179575 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1179575' 00:20:07.015 killing process with pid 1179575 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1179575 00:20:07.015 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.015 00:20:07.015 Latency(us) 00:20:07.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:07.015 =================================================================================================================== 00:20:07.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.015 [2024-07-15 16:01:33.885059] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:07.015 16:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1179575 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.m1FmyOkO2m 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1FmyOkO2m 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1FmyOkO2m 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1FmyOkO2m 00:20:07.273 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m1FmyOkO2m' 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1180891 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1180891 /var/tmp/bdevperf.sock 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1180891 ']' 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.274 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.274 [2024-07-15 16:01:34.197004] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
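The `chmod 0666` above deliberately loosens the key file for a negative test: the next attach is expected to fail, because SPDK refuses to load a PSK that is readable beyond its owner. Reduced to its essentials (same paths as the log; the matching error appears in the request/response pair that follows):

```bash
chmod 0666 /tmp/tmp.m1FmyOkO2m    # too permissive on purpose
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.m1FmyOkO2m
# bdev_nvme: "Incorrect permissions for PSK file"
# -> JSON-RPC error {"code": -1, "message": "Operation not permitted"}
```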
00:20:07.274 [2024-07-15 16:01:34.197083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180891 ] 00:20:07.532 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.532 [2024-07-15 16:01:34.256448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.532 [2024-07-15 16:01:34.364529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m 00:20:07.790 [2024-07-15 16:01:34.697510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.790 [2024-07-15 16:01:34.697598] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:07.790 [2024-07-15 16:01:34.697613] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.m1FmyOkO2m 00:20:07.790 request: 00:20:07.790 { 00:20:07.790 "name": "TLSTEST", 00:20:07.790 "trtype": "tcp", 00:20:07.790 "traddr": "10.0.0.2", 00:20:07.790 "adrfam": "ipv4", 00:20:07.790 "trsvcid": "4420", 00:20:07.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.790 "prchk_reftag": false, 00:20:07.790 "prchk_guard": false, 00:20:07.790 "hdgst": false, 00:20:07.790 "ddgst": false, 00:20:07.790 "psk": "/tmp/tmp.m1FmyOkO2m", 00:20:07.790 "method": "bdev_nvme_attach_controller", 00:20:07.790 "req_id": 1 00:20:07.790 } 00:20:07.790 Got JSON-RPC error response 00:20:07.790 response: 00:20:07.790 { 00:20:07.790 "code": -1, 00:20:07.790 "message": "Operation not permitted" 00:20:07.790 } 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1180891 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1180891 ']' 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1180891 00:20:07.790 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1180891 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1180891' 00:20:08.048 killing process with pid 1180891 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1180891 00:20:08.048 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.048 00:20:08.048 Latency(us) 00:20:08.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.048 
=================================================================================================================== 00:20:08.048 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:08.048 16:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1180891 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1179278 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1179278 ']' 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1179278 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179278 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1179278' 00:20:08.306 killing process with pid 1179278 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1179278 00:20:08.306 [2024-07-15 16:01:35.034416] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:08.306 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1179278 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1181040 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1181040 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181040 ']' 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
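The `es=1` bookkeeping threaded through this stretch comes from the harness's expected-failure wrapper: `NOT <cmd>` inverts the wrapped command's exit status so that a failing attach counts as a passing test step. A minimal stand-in for the idea (the real helper in autotest_common.sh also records the exit code it saw):

```bash
NOT() { ! "$@"; }   # succeeds only if the wrapped command fails

NOT false && echo "wrapped command failed, step passes"
NOT true  || echo "wrapped command succeeded, step fails"
```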
00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.564 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.564 [2024-07-15 16:01:35.381398] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:08.564 [2024-07-15 16:01:35.381490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.564 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.564 [2024-07-15 16:01:35.450299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.822 [2024-07-15 16:01:35.564281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.822 [2024-07-15 16:01:35.564343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.822 [2024-07-15 16:01:35.564359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.822 [2024-07-15 16:01:35.564372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.822 [2024-07-15 16:01:35.564384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.822 [2024-07-15 16:01:35.564422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.822 16:01:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m1FmyOkO2m 00:20:08.823 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.079 [2024-07-15 16:01:35.921299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.079 16:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.337 
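For readability, the `setup_nvmf_tgt` helper being replayed here boils down to the RPC sequence below (workspace prefixes dropped; `$key_path` stands in for /tmp/tmp.m1FmyOkO2m). In this negative pass the key is still mode 0666, so the run is expected to die at the final step:

```bash
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k          # -k enables TLS on the listener
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key_path"   # PSK-path form, deprecated in
                                                  # v24.09; fails unless key is 0600
```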
16:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.595 [2024-07-15 16:01:36.414621] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.595 [2024-07-15 16:01:36.414850] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.595 16:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.853 malloc0 00:20:09.853 16:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.112 16:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m 00:20:10.379 [2024-07-15 16:01:37.152726] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:10.379 [2024-07-15 16:01:37.152768] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:10.379 [2024-07-15 16:01:37.152819] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:10.379 request: 00:20:10.379 { 00:20:10.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.379 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.379 "psk": "/tmp/tmp.m1FmyOkO2m", 00:20:10.379 "method": "nvmf_subsystem_add_host", 00:20:10.379 "req_id": 1 00:20:10.379 } 00:20:10.379 Got JSON-RPC error response 00:20:10.379 response: 00:20:10.379 { 00:20:10.379 "code": -32603, 00:20:10.379 "message": "Internal error" 00:20:10.379 } 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1181040 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181040 ']' 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181040 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181040 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181040' 00:20:10.379 killing process with pid 1181040 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181040 00:20:10.379 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181040 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.m1FmyOkO2m 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:10.637 
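The -32603 just logged is the target-side twin of the client-side permission check seen earlier: with the key file still 0666, the target cannot retrieve the PSK at `nvmf_subsystem_add_host` time, and the failure surfaces as an internal error rather than the client's -1. In isolation:

```bash
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m
# tcp.c: "Incorrect permissions for PSK file" / "Could not retrieve PSK from file"
# -> {"code": -32603, "message": "Internal error"}
```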
16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1181334 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1181334 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181334 ']' 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.637 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.637 [2024-07-15 16:01:37.534887] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:10.637 [2024-07-15 16:01:37.534982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.637 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.896 [2024-07-15 16:01:37.597733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.896 [2024-07-15 16:01:37.706803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.896 [2024-07-15 16:01:37.706868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.896 [2024-07-15 16:01:37.706894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.896 [2024-07-15 16:01:37.706910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.896 [2024-07-15 16:01:37.706922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
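The banner above doubles as a tracing how-to: the target was started with `-e 0xFFFF`, so all tracepoint groups are live. Per the notices, a snapshot can be pulled while the run continues (binary path as built in this workspace):

```bash
build/bin/spdk_trace -s nvmf -i 0   # live snapshot of the nvmf app, shm id 0
cp /dev/shm/nvmf_trace.0 /tmp/      # or stash the trace ring for offline analysis
```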
00:20:10.896 [2024-07-15 16:01:37.706963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m1FmyOkO2m 00:20:11.153 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.153 [2024-07-15 16:01:38.077264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.409 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.665 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.922 [2024-07-15 16:01:38.658809] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.922 [2024-07-15 16:01:38.659054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.922 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.212 malloc0 00:20:12.212 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.496 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m 00:20:12.496 [2024-07-15 16:01:39.411873] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1181506 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1181506 /var/tmp/bdevperf.sock 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181506 ']' 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.754 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.754 [2024-07-15 16:01:39.474941] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:12.754 [2024-07-15 16:01:39.475013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181506 ] 00:20:12.754 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.754 [2024-07-15 16:01:39.531578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.754 [2024-07-15 16:01:39.636205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.012 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.012 16:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:13.012 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1FmyOkO2m 00:20:13.270 [2024-07-15 16:01:40.019770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.270 [2024-07-15 16:01:40.019906] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.270 TLSTESTn1 00:20:13.270 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:13.528 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:13.528 "subsystems": [ 00:20:13.528 { 00:20:13.528 "subsystem": "keyring", 00:20:13.528 "config": [] 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "subsystem": "iobuf", 00:20:13.528 "config": [ 00:20:13.528 { 00:20:13.528 "method": "iobuf_set_options", 00:20:13.528 "params": { 00:20:13.528 "small_pool_count": 8192, 00:20:13.528 "large_pool_count": 1024, 00:20:13.528 "small_bufsize": 8192, 00:20:13.528 "large_bufsize": 135168 00:20:13.528 } 00:20:13.528 } 00:20:13.528 ] 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "subsystem": "sock", 00:20:13.528 "config": [ 00:20:13.528 { 00:20:13.528 "method": "sock_set_default_impl", 00:20:13.528 "params": { 00:20:13.528 "impl_name": "posix" 00:20:13.528 } 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "method": "sock_impl_set_options", 00:20:13.528 "params": { 00:20:13.528 "impl_name": "ssl", 00:20:13.528 "recv_buf_size": 4096, 00:20:13.528 "send_buf_size": 4096, 00:20:13.528 "enable_recv_pipe": true, 00:20:13.528 "enable_quickack": false, 00:20:13.528 "enable_placement_id": 0, 00:20:13.528 "enable_zerocopy_send_server": true, 00:20:13.528 "enable_zerocopy_send_client": false, 00:20:13.528 "zerocopy_threshold": 0, 00:20:13.528 "tls_version": 0, 00:20:13.528 "enable_ktls": false 00:20:13.528 } 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "method": "sock_impl_set_options", 00:20:13.528 "params": { 00:20:13.528 "impl_name": "posix", 00:20:13.528 "recv_buf_size": 2097152, 00:20:13.528 
"send_buf_size": 2097152, 00:20:13.528 "enable_recv_pipe": true, 00:20:13.528 "enable_quickack": false, 00:20:13.528 "enable_placement_id": 0, 00:20:13.528 "enable_zerocopy_send_server": true, 00:20:13.528 "enable_zerocopy_send_client": false, 00:20:13.528 "zerocopy_threshold": 0, 00:20:13.528 "tls_version": 0, 00:20:13.528 "enable_ktls": false 00:20:13.528 } 00:20:13.528 } 00:20:13.528 ] 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "subsystem": "vmd", 00:20:13.528 "config": [] 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "subsystem": "accel", 00:20:13.528 "config": [ 00:20:13.528 { 00:20:13.528 "method": "accel_set_options", 00:20:13.528 "params": { 00:20:13.528 "small_cache_size": 128, 00:20:13.528 "large_cache_size": 16, 00:20:13.528 "task_count": 2048, 00:20:13.528 "sequence_count": 2048, 00:20:13.528 "buf_count": 2048 00:20:13.528 } 00:20:13.528 } 00:20:13.528 ] 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "subsystem": "bdev", 00:20:13.528 "config": [ 00:20:13.528 { 00:20:13.528 "method": "bdev_set_options", 00:20:13.528 "params": { 00:20:13.528 "bdev_io_pool_size": 65535, 00:20:13.528 "bdev_io_cache_size": 256, 00:20:13.528 "bdev_auto_examine": true, 00:20:13.528 "iobuf_small_cache_size": 128, 00:20:13.528 "iobuf_large_cache_size": 16 00:20:13.528 } 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "method": "bdev_raid_set_options", 00:20:13.528 "params": { 00:20:13.528 "process_window_size_kb": 1024 00:20:13.528 } 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "method": "bdev_iscsi_set_options", 00:20:13.528 "params": { 00:20:13.528 "timeout_sec": 30 00:20:13.528 } 00:20:13.528 }, 00:20:13.528 { 00:20:13.528 "method": "bdev_nvme_set_options", 00:20:13.528 "params": { 00:20:13.528 "action_on_timeout": "none", 00:20:13.529 "timeout_us": 0, 00:20:13.529 "timeout_admin_us": 0, 00:20:13.529 "keep_alive_timeout_ms": 10000, 00:20:13.529 "arbitration_burst": 0, 00:20:13.529 "low_priority_weight": 0, 00:20:13.529 "medium_priority_weight": 0, 00:20:13.529 "high_priority_weight": 0, 00:20:13.529 "nvme_adminq_poll_period_us": 10000, 00:20:13.529 "nvme_ioq_poll_period_us": 0, 00:20:13.529 "io_queue_requests": 0, 00:20:13.529 "delay_cmd_submit": true, 00:20:13.529 "transport_retry_count": 4, 00:20:13.529 "bdev_retry_count": 3, 00:20:13.529 "transport_ack_timeout": 0, 00:20:13.529 "ctrlr_loss_timeout_sec": 0, 00:20:13.529 "reconnect_delay_sec": 0, 00:20:13.529 "fast_io_fail_timeout_sec": 0, 00:20:13.529 "disable_auto_failback": false, 00:20:13.529 "generate_uuids": false, 00:20:13.529 "transport_tos": 0, 00:20:13.529 "nvme_error_stat": false, 00:20:13.529 "rdma_srq_size": 0, 00:20:13.529 "io_path_stat": false, 00:20:13.529 "allow_accel_sequence": false, 00:20:13.529 "rdma_max_cq_size": 0, 00:20:13.529 "rdma_cm_event_timeout_ms": 0, 00:20:13.529 "dhchap_digests": [ 00:20:13.529 "sha256", 00:20:13.529 "sha384", 00:20:13.529 "sha512" 00:20:13.529 ], 00:20:13.529 "dhchap_dhgroups": [ 00:20:13.529 "null", 00:20:13.529 "ffdhe2048", 00:20:13.529 "ffdhe3072", 00:20:13.529 "ffdhe4096", 00:20:13.529 "ffdhe6144", 00:20:13.529 "ffdhe8192" 00:20:13.529 ] 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "bdev_nvme_set_hotplug", 00:20:13.529 "params": { 00:20:13.529 "period_us": 100000, 00:20:13.529 "enable": false 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "bdev_malloc_create", 00:20:13.529 "params": { 00:20:13.529 "name": "malloc0", 00:20:13.529 "num_blocks": 8192, 00:20:13.529 "block_size": 4096, 00:20:13.529 "physical_block_size": 4096, 00:20:13.529 "uuid": 
"cc423521-04f2-43ac-bf16-8f277dc32ad3", 00:20:13.529 "optimal_io_boundary": 0 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "bdev_wait_for_examine" 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "subsystem": "nbd", 00:20:13.529 "config": [] 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "subsystem": "scheduler", 00:20:13.529 "config": [ 00:20:13.529 { 00:20:13.529 "method": "framework_set_scheduler", 00:20:13.529 "params": { 00:20:13.529 "name": "static" 00:20:13.529 } 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "subsystem": "nvmf", 00:20:13.529 "config": [ 00:20:13.529 { 00:20:13.529 "method": "nvmf_set_config", 00:20:13.529 "params": { 00:20:13.529 "discovery_filter": "match_any", 00:20:13.529 "admin_cmd_passthru": { 00:20:13.529 "identify_ctrlr": false 00:20:13.529 } 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_set_max_subsystems", 00:20:13.529 "params": { 00:20:13.529 "max_subsystems": 1024 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_set_crdt", 00:20:13.529 "params": { 00:20:13.529 "crdt1": 0, 00:20:13.529 "crdt2": 0, 00:20:13.529 "crdt3": 0 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_create_transport", 00:20:13.529 "params": { 00:20:13.529 "trtype": "TCP", 00:20:13.529 "max_queue_depth": 128, 00:20:13.529 "max_io_qpairs_per_ctrlr": 127, 00:20:13.529 "in_capsule_data_size": 4096, 00:20:13.529 "max_io_size": 131072, 00:20:13.529 "io_unit_size": 131072, 00:20:13.529 "max_aq_depth": 128, 00:20:13.529 "num_shared_buffers": 511, 00:20:13.529 "buf_cache_size": 4294967295, 00:20:13.529 "dif_insert_or_strip": false, 00:20:13.529 "zcopy": false, 00:20:13.529 "c2h_success": false, 00:20:13.529 "sock_priority": 0, 00:20:13.529 "abort_timeout_sec": 1, 00:20:13.529 "ack_timeout": 0, 00:20:13.529 "data_wr_pool_size": 0 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_create_subsystem", 00:20:13.529 "params": { 00:20:13.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.529 "allow_any_host": false, 00:20:13.529 "serial_number": "SPDK00000000000001", 00:20:13.529 "model_number": "SPDK bdev Controller", 00:20:13.529 "max_namespaces": 10, 00:20:13.529 "min_cntlid": 1, 00:20:13.529 "max_cntlid": 65519, 00:20:13.529 "ana_reporting": false 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_subsystem_add_host", 00:20:13.529 "params": { 00:20:13.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.529 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.529 "psk": "/tmp/tmp.m1FmyOkO2m" 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_subsystem_add_ns", 00:20:13.529 "params": { 00:20:13.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.529 "namespace": { 00:20:13.529 "nsid": 1, 00:20:13.529 "bdev_name": "malloc0", 00:20:13.529 "nguid": "CC42352104F243ACBF168F277DC32AD3", 00:20:13.529 "uuid": "cc423521-04f2-43ac-bf16-8f277dc32ad3", 00:20:13.529 "no_auto_visible": false 00:20:13.529 } 00:20:13.529 } 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "method": "nvmf_subsystem_add_listener", 00:20:13.529 "params": { 00:20:13.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.529 "listen_address": { 00:20:13.529 "trtype": "TCP", 00:20:13.529 "adrfam": "IPv4", 00:20:13.529 "traddr": "10.0.0.2", 00:20:13.529 "trsvcid": "4420" 00:20:13.529 }, 00:20:13.529 "secure_channel": true 00:20:13.529 } 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 }' 00:20:13.529 16:01:40 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:14.095 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:14.095 "subsystems": [ 00:20:14.095 { 00:20:14.095 "subsystem": "keyring", 00:20:14.095 "config": [] 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "subsystem": "iobuf", 00:20:14.095 "config": [ 00:20:14.095 { 00:20:14.095 "method": "iobuf_set_options", 00:20:14.095 "params": { 00:20:14.095 "small_pool_count": 8192, 00:20:14.095 "large_pool_count": 1024, 00:20:14.095 "small_bufsize": 8192, 00:20:14.095 "large_bufsize": 135168 00:20:14.095 } 00:20:14.095 } 00:20:14.095 ] 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "subsystem": "sock", 00:20:14.095 "config": [ 00:20:14.095 { 00:20:14.095 "method": "sock_set_default_impl", 00:20:14.095 "params": { 00:20:14.095 "impl_name": "posix" 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "sock_impl_set_options", 00:20:14.095 "params": { 00:20:14.095 "impl_name": "ssl", 00:20:14.095 "recv_buf_size": 4096, 00:20:14.095 "send_buf_size": 4096, 00:20:14.095 "enable_recv_pipe": true, 00:20:14.095 "enable_quickack": false, 00:20:14.095 "enable_placement_id": 0, 00:20:14.095 "enable_zerocopy_send_server": true, 00:20:14.095 "enable_zerocopy_send_client": false, 00:20:14.095 "zerocopy_threshold": 0, 00:20:14.095 "tls_version": 0, 00:20:14.095 "enable_ktls": false 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "sock_impl_set_options", 00:20:14.095 "params": { 00:20:14.095 "impl_name": "posix", 00:20:14.095 "recv_buf_size": 2097152, 00:20:14.095 "send_buf_size": 2097152, 00:20:14.095 "enable_recv_pipe": true, 00:20:14.095 "enable_quickack": false, 00:20:14.095 "enable_placement_id": 0, 00:20:14.095 "enable_zerocopy_send_server": true, 00:20:14.095 "enable_zerocopy_send_client": false, 00:20:14.095 "zerocopy_threshold": 0, 00:20:14.095 "tls_version": 0, 00:20:14.095 "enable_ktls": false 00:20:14.095 } 00:20:14.095 } 00:20:14.095 ] 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "subsystem": "vmd", 00:20:14.095 "config": [] 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "subsystem": "accel", 00:20:14.095 "config": [ 00:20:14.095 { 00:20:14.095 "method": "accel_set_options", 00:20:14.095 "params": { 00:20:14.095 "small_cache_size": 128, 00:20:14.095 "large_cache_size": 16, 00:20:14.095 "task_count": 2048, 00:20:14.095 "sequence_count": 2048, 00:20:14.095 "buf_count": 2048 00:20:14.095 } 00:20:14.095 } 00:20:14.095 ] 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "subsystem": "bdev", 00:20:14.095 "config": [ 00:20:14.095 { 00:20:14.095 "method": "bdev_set_options", 00:20:14.095 "params": { 00:20:14.095 "bdev_io_pool_size": 65535, 00:20:14.095 "bdev_io_cache_size": 256, 00:20:14.095 "bdev_auto_examine": true, 00:20:14.095 "iobuf_small_cache_size": 128, 00:20:14.095 "iobuf_large_cache_size": 16 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "bdev_raid_set_options", 00:20:14.095 "params": { 00:20:14.095 "process_window_size_kb": 1024 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "bdev_iscsi_set_options", 00:20:14.095 "params": { 00:20:14.095 "timeout_sec": 30 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "bdev_nvme_set_options", 00:20:14.095 "params": { 00:20:14.095 "action_on_timeout": "none", 00:20:14.095 "timeout_us": 0, 00:20:14.095 "timeout_admin_us": 0, 00:20:14.095 "keep_alive_timeout_ms": 10000, 00:20:14.095 "arbitration_burst": 0, 
00:20:14.095 "low_priority_weight": 0, 00:20:14.095 "medium_priority_weight": 0, 00:20:14.095 "high_priority_weight": 0, 00:20:14.095 "nvme_adminq_poll_period_us": 10000, 00:20:14.095 "nvme_ioq_poll_period_us": 0, 00:20:14.095 "io_queue_requests": 512, 00:20:14.095 "delay_cmd_submit": true, 00:20:14.095 "transport_retry_count": 4, 00:20:14.095 "bdev_retry_count": 3, 00:20:14.095 "transport_ack_timeout": 0, 00:20:14.095 "ctrlr_loss_timeout_sec": 0, 00:20:14.095 "reconnect_delay_sec": 0, 00:20:14.095 "fast_io_fail_timeout_sec": 0, 00:20:14.095 "disable_auto_failback": false, 00:20:14.095 "generate_uuids": false, 00:20:14.095 "transport_tos": 0, 00:20:14.095 "nvme_error_stat": false, 00:20:14.095 "rdma_srq_size": 0, 00:20:14.095 "io_path_stat": false, 00:20:14.095 "allow_accel_sequence": false, 00:20:14.095 "rdma_max_cq_size": 0, 00:20:14.095 "rdma_cm_event_timeout_ms": 0, 00:20:14.095 "dhchap_digests": [ 00:20:14.095 "sha256", 00:20:14.095 "sha384", 00:20:14.095 "sha512" 00:20:14.095 ], 00:20:14.095 "dhchap_dhgroups": [ 00:20:14.095 "null", 00:20:14.095 "ffdhe2048", 00:20:14.095 "ffdhe3072", 00:20:14.095 "ffdhe4096", 00:20:14.095 "ffdhe6144", 00:20:14.095 "ffdhe8192" 00:20:14.095 ] 00:20:14.095 } 00:20:14.095 }, 00:20:14.095 { 00:20:14.095 "method": "bdev_nvme_attach_controller", 00:20:14.095 "params": { 00:20:14.095 "name": "TLSTEST", 00:20:14.096 "trtype": "TCP", 00:20:14.096 "adrfam": "IPv4", 00:20:14.096 "traddr": "10.0.0.2", 00:20:14.096 "trsvcid": "4420", 00:20:14.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.096 "prchk_reftag": false, 00:20:14.096 "prchk_guard": false, 00:20:14.096 "ctrlr_loss_timeout_sec": 0, 00:20:14.096 "reconnect_delay_sec": 0, 00:20:14.096 "fast_io_fail_timeout_sec": 0, 00:20:14.096 "psk": "/tmp/tmp.m1FmyOkO2m", 00:20:14.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.096 "hdgst": false, 00:20:14.096 "ddgst": false 00:20:14.096 } 00:20:14.096 }, 00:20:14.096 { 00:20:14.096 "method": "bdev_nvme_set_hotplug", 00:20:14.096 "params": { 00:20:14.096 "period_us": 100000, 00:20:14.096 "enable": false 00:20:14.096 } 00:20:14.096 }, 00:20:14.096 { 00:20:14.096 "method": "bdev_wait_for_examine" 00:20:14.096 } 00:20:14.096 ] 00:20:14.096 }, 00:20:14.096 { 00:20:14.096 "subsystem": "nbd", 00:20:14.096 "config": [] 00:20:14.096 } 00:20:14.096 ] 00:20:14.096 }' 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1181506 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181506 ']' 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181506 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181506 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181506' 00:20:14.096 killing process with pid 1181506 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181506 00:20:14.096 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.096 00:20:14.096 Latency(us) 00:20:14.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:14.096 =================================================================================================================== 00:20:14.096 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.096 [2024-07-15 16:01:40.762047] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:14.096 16:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181506 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1181334 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181334 ']' 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181334 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.096 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181334 00:20:14.353 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.353 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.353 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181334' 00:20:14.353 killing process with pid 1181334 00:20:14.353 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181334 00:20:14.353 [2024-07-15 16:01:41.040803] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:14.353 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181334 00:20:14.611 16:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:14.611 16:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.611 16:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:14.611 "subsystems": [ 00:20:14.611 { 00:20:14.611 "subsystem": "keyring", 00:20:14.611 "config": [] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "iobuf", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "iobuf_set_options", 00:20:14.611 "params": { 00:20:14.611 "small_pool_count": 8192, 00:20:14.611 "large_pool_count": 1024, 00:20:14.611 "small_bufsize": 8192, 00:20:14.611 "large_bufsize": 135168 00:20:14.611 } 00:20:14.611 } 00:20:14.611 ] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "sock", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "sock_set_default_impl", 00:20:14.611 "params": { 00:20:14.611 "impl_name": "posix" 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "sock_impl_set_options", 00:20:14.611 "params": { 00:20:14.611 "impl_name": "ssl", 00:20:14.611 "recv_buf_size": 4096, 00:20:14.611 "send_buf_size": 4096, 00:20:14.611 "enable_recv_pipe": true, 00:20:14.611 "enable_quickack": false, 00:20:14.611 "enable_placement_id": 0, 00:20:14.611 "enable_zerocopy_send_server": true, 00:20:14.611 "enable_zerocopy_send_client": false, 00:20:14.611 "zerocopy_threshold": 0, 00:20:14.611 "tls_version": 0, 00:20:14.611 "enable_ktls": false 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "sock_impl_set_options", 00:20:14.611 "params": { 00:20:14.611 "impl_name": "posix", 00:20:14.611 "recv_buf_size": 2097152, 00:20:14.611 "send_buf_size": 2097152, 00:20:14.611 "enable_recv_pipe": true, 
00:20:14.611 "enable_quickack": false, 00:20:14.611 "enable_placement_id": 0, 00:20:14.611 "enable_zerocopy_send_server": true, 00:20:14.611 "enable_zerocopy_send_client": false, 00:20:14.611 "zerocopy_threshold": 0, 00:20:14.611 "tls_version": 0, 00:20:14.611 "enable_ktls": false 00:20:14.611 } 00:20:14.611 } 00:20:14.611 ] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "vmd", 00:20:14.611 "config": [] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "accel", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "accel_set_options", 00:20:14.611 "params": { 00:20:14.611 "small_cache_size": 128, 00:20:14.611 "large_cache_size": 16, 00:20:14.611 "task_count": 2048, 00:20:14.611 "sequence_count": 2048, 00:20:14.611 "buf_count": 2048 00:20:14.611 } 00:20:14.611 } 00:20:14.611 ] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "bdev", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "bdev_set_options", 00:20:14.611 "params": { 00:20:14.611 "bdev_io_pool_size": 65535, 00:20:14.611 "bdev_io_cache_size": 256, 00:20:14.611 "bdev_auto_examine": true, 00:20:14.611 "iobuf_small_cache_size": 128, 00:20:14.611 "iobuf_large_cache_size": 16 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_raid_set_options", 00:20:14.611 "params": { 00:20:14.611 "process_window_size_kb": 1024 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_iscsi_set_options", 00:20:14.611 "params": { 00:20:14.611 "timeout_sec": 30 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_nvme_set_options", 00:20:14.611 "params": { 00:20:14.611 "action_on_timeout": "none", 00:20:14.611 "timeout_us": 0, 00:20:14.611 "timeout_admin_us": 0, 00:20:14.611 "keep_alive_timeout_ms": 10000, 00:20:14.611 "arbitration_burst": 0, 00:20:14.611 "low_priority_weight": 0, 00:20:14.611 "medium_priority_weight": 0, 00:20:14.611 "high_priority_weight": 0, 00:20:14.611 "nvme_adminq_poll_period_us": 10000, 00:20:14.611 "nvme_ioq_poll_period_us": 0, 00:20:14.611 "io_queue_requests": 0, 00:20:14.611 "delay_cmd_submit": true, 00:20:14.611 "transport_retry_count": 4, 00:20:14.611 "bdev_retry_count": 3, 00:20:14.611 "transport_ack_timeout": 0, 00:20:14.611 "ctrlr_loss_timeout_sec": 0, 00:20:14.611 "reconnect_delay_sec": 0, 00:20:14.611 "fast_io_fail_timeout_sec": 0, 00:20:14.611 "disable_auto_failback": false, 00:20:14.611 "generate_uuids": false, 00:20:14.611 "transport_tos": 0, 00:20:14.611 "nvme_error_stat": false, 00:20:14.611 "rdma_srq_size": 0, 00:20:14.611 "io_path_stat": false, 00:20:14.611 "allow_accel_sequence": false, 00:20:14.611 "rdma_max_cq_size": 0, 00:20:14.611 "rdma_cm_event_timeout_ms": 0, 00:20:14.611 "dhchap_digests": [ 00:20:14.611 "sha256", 00:20:14.611 "sha384", 00:20:14.611 "sha512" 00:20:14.611 ], 00:20:14.611 "dhchap_dhgroups": [ 00:20:14.611 "null", 00:20:14.611 "ffdhe2048", 00:20:14.611 "ffdhe3072", 00:20:14.611 "ffdhe4096", 00:20:14.611 "ffdhe6144", 00:20:14.611 "ffdhe8192" 00:20:14.611 ] 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_nvme_set_hotplug", 00:20:14.611 "params": { 00:20:14.611 "period_us": 100000, 00:20:14.611 "enable": false 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_malloc_create", 00:20:14.611 "params": { 00:20:14.611 "name": "malloc0", 00:20:14.611 "num_blocks": 8192, 00:20:14.611 "block_size": 4096, 00:20:14.611 "physical_block_size": 4096, 00:20:14.611 "uuid": "cc423521-04f2-43ac-bf16-8f277dc32ad3", 00:20:14.611 "optimal_io_boundary": 0 
00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "bdev_wait_for_examine" 00:20:14.611 } 00:20:14.611 ] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "nbd", 00:20:14.611 "config": [] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "scheduler", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "framework_set_scheduler", 00:20:14.611 "params": { 00:20:14.611 "name": "static" 00:20:14.611 } 00:20:14.611 } 00:20:14.611 ] 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "subsystem": "nvmf", 00:20:14.611 "config": [ 00:20:14.611 { 00:20:14.611 "method": "nvmf_set_config", 00:20:14.611 "params": { 00:20:14.611 "discovery_filter": "match_any", 00:20:14.611 "admin_cmd_passthru": { 00:20:14.611 "identify_ctrlr": false 00:20:14.611 } 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "nvmf_set_max_subsystems", 00:20:14.611 "params": { 00:20:14.611 "max_subsystems": 1024 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.611 "method": "nvmf_set_crdt", 00:20:14.611 "params": { 00:20:14.611 "crdt1": 0, 00:20:14.611 "crdt2": 0, 00:20:14.611 "crdt3": 0 00:20:14.611 } 00:20:14.611 }, 00:20:14.611 { 00:20:14.612 "method": "nvmf_create_transport", 00:20:14.612 "params": { 00:20:14.612 "trtype": "TCP", 00:20:14.612 "max_queue_depth": 128, 00:20:14.612 "max_io_qpairs_per_ctrlr": 127, 00:20:14.612 "in_capsule_data_size": 4096, 00:20:14.612 "max_io_size": 131072, 00:20:14.612 "io_unit_size": 131072, 00:20:14.612 "max_aq_depth": 128, 00:20:14.612 "num_shared_buffers": 511, 00:20:14.612 "buf_cache_size": 4294967295, 00:20:14.612 "dif_insert_or_strip": false, 00:20:14.612 "zcopy": false, 00:20:14.612 "c2h_success": false, 00:20:14.612 "sock_priority": 0, 00:20:14.612 "abort_timeout_sec": 1, 00:20:14.612 "ack_timeout": 0, 00:20:14.612 "data_wr_pool_size": 0 00:20:14.612 } 00:20:14.612 }, 00:20:14.612 { 00:20:14.612 "method": "nvmf_create_subsystem", 00:20:14.612 "params": { 00:20:14.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.612 "allow_any_host": false, 00:20:14.612 "serial_number": "SPDK00000000000001", 00:20:14.612 "model_number": "SPDK bdev Controller", 00:20:14.612 "max_namespaces": 10, 00:20:14.612 "min_cntlid": 1, 00:20:14.612 "max_cntlid": 65519, 00:20:14.612 "ana_reporting": false 00:20:14.612 } 00:20:14.612 }, 00:20:14.612 { 00:20:14.612 "method": "nvmf_subsystem_add_host", 00:20:14.612 "params": { 00:20:14.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.612 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.612 "psk": "/tmp/tmp.m1FmyOkO2m" 00:20:14.612 } 00:20:14.612 }, 00:20:14.612 { 00:20:14.612 "method": "nvmf_subsystem_add_ns", 00:20:14.612 "params": { 00:20:14.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.612 "namespace": { 00:20:14.612 "nsid": 1, 00:20:14.612 "bdev_name": "malloc0", 00:20:14.612 "nguid": "CC42352104F243ACBF168F277DC32AD3", 00:20:14.612 "uuid": "cc423521-04f2-43ac-bf16-8f277dc32ad3", 00:20:14.612 "no_auto_visible": false 00:20:14.612 } 00:20:14.612 } 00:20:14.612 }, 00:20:14.612 { 00:20:14.612 "method": "nvmf_subsystem_add_listener", 00:20:14.612 "params": { 00:20:14.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.612 "listen_address": { 00:20:14.612 "trtype": "TCP", 00:20:14.612 "adrfam": "IPv4", 00:20:14.612 "traddr": "10.0.0.2", 00:20:14.612 "trsvcid": "4420" 00:20:14.612 }, 00:20:14.612 "secure_channel": true 00:20:14.612 } 00:20:14.612 } 00:20:14.612 ] 00:20:14.612 } 00:20:14.612 ] 00:20:14.612 }' 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.612 
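The restart at target/tls.sh@203 feeds the whole JSON dump above to nvmf_tgt through /dev/fd/62, i.e. via process substitution. A minimal sketch of that pattern, reusing the binary path and flags visible in this trace (the one-line config string is a placeholder, not the real dump):

  # Start nvmf_tgt with a JSON config delivered over a process-substitution fd,
  # mirroring "nvmfappstart -m 0x2 -c /dev/fd/62" in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in this log
  config='{"subsystems":[]}'                                   # placeholder; the real dump is printed above
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config") &
  nvmfpid=$!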
16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1181783 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1181783 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181783 ']' 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.612 16:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.612 [2024-07-15 16:01:41.392800] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:14.612 [2024-07-15 16:01:41.392909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.612 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.612 [2024-07-15 16:01:41.461947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.869 [2024-07-15 16:01:41.582241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.869 [2024-07-15 16:01:41.582311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.869 [2024-07-15 16:01:41.582327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.869 [2024-07-15 16:01:41.582340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.869 [2024-07-15 16:01:41.582352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
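waitforlisten 1181783 then blocks until the freshly started target answers on /var/tmp/spdk.sock. A condensed sketch of that polling loop, assuming rpc.py from the same tree (the real helper in common/autotest_common.sh carries more retries and diagnostics than shown here):

  # Poll the UNIX-domain RPC socket until the app is up, or bail if it died.
  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || return 1                   # process exited early
      "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
      sleep 0.1
    done
    return 1
  }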
00:20:14.869 [2024-07-15 16:01:41.582444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.126 [2024-07-15 16:01:41.822896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.126 [2024-07-15 16:01:41.838829] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.126 [2024-07-15 16:01:41.854912] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.126 [2024-07-15 16:01:41.872095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1181934 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1181934 /var/tmp/bdevperf.sock 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181934 ']' 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
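bdevperf is launched at target/tls.sh@204 with -z, so it parks on its RPC socket until the JSON config arrives and the Python wrapper at @211 kicks off the actual I/O. The same flow by hand, with the exact flags from this trace:

  # Launch bdevperf in wait-for-RPC mode and drive it from bdevperf.py.
  BDEVPERF=$SPDK_DIR/build/examples/bdevperf
  BDEVPERF_PY=$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py
  SOCK=/var/tmp/bdevperf.sock
  "$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_config") &
  bdevperf_pid=$!
  waitforlisten_sketch "$bdevperf_pid" "$SOCK"    # see the polling sketch above
  "$BDEVPERF_PY" -t 20 -s "$SOCK" perform_tests   # -t 20 is the RPC timeout; -t 10 above is the I/O runtime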
00:20:15.691 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:15.691 "subsystems": [ 00:20:15.691 { 00:20:15.691 "subsystem": "keyring", 00:20:15.691 "config": [] 00:20:15.691 }, 00:20:15.691 { 00:20:15.691 "subsystem": "iobuf", 00:20:15.691 "config": [ 00:20:15.691 { 00:20:15.691 "method": "iobuf_set_options", 00:20:15.691 "params": { 00:20:15.691 "small_pool_count": 8192, 00:20:15.691 "large_pool_count": 1024, 00:20:15.691 "small_bufsize": 8192, 00:20:15.691 "large_bufsize": 135168 00:20:15.691 } 00:20:15.691 } 00:20:15.691 ] 00:20:15.691 }, 00:20:15.691 { 00:20:15.691 "subsystem": "sock", 00:20:15.691 "config": [ 00:20:15.691 { 00:20:15.691 "method": "sock_set_default_impl", 00:20:15.691 "params": { 00:20:15.691 "impl_name": "posix" 00:20:15.691 } 00:20:15.691 }, 00:20:15.691 { 00:20:15.691 "method": "sock_impl_set_options", 00:20:15.691 "params": { 00:20:15.691 "impl_name": "ssl", 00:20:15.691 "recv_buf_size": 4096, 00:20:15.691 "send_buf_size": 4096, 00:20:15.691 "enable_recv_pipe": true, 00:20:15.691 "enable_quickack": false, 00:20:15.691 "enable_placement_id": 0, 00:20:15.691 "enable_zerocopy_send_server": true, 00:20:15.691 "enable_zerocopy_send_client": false, 00:20:15.691 "zerocopy_threshold": 0, 00:20:15.691 "tls_version": 0, 00:20:15.691 "enable_ktls": false 00:20:15.691 } 00:20:15.691 }, 00:20:15.691 { 00:20:15.691 "method": "sock_impl_set_options", 00:20:15.691 "params": { 00:20:15.691 "impl_name": "posix", 00:20:15.691 "recv_buf_size": 2097152, 00:20:15.691 "send_buf_size": 2097152, 00:20:15.691 "enable_recv_pipe": true, 00:20:15.691 "enable_quickack": false, 00:20:15.691 "enable_placement_id": 0, 00:20:15.691 "enable_zerocopy_send_server": true, 00:20:15.691 "enable_zerocopy_send_client": false, 00:20:15.691 "zerocopy_threshold": 0, 00:20:15.691 "tls_version": 0, 00:20:15.691 "enable_ktls": false 00:20:15.691 } 00:20:15.691 } 00:20:15.691 ] 00:20:15.691 }, 00:20:15.691 { 00:20:15.691 "subsystem": "vmd", 00:20:15.691 "config": [] 00:20:15.691 }, 00:20:15.692 { 00:20:15.692 "subsystem": "accel", 00:20:15.692 "config": [ 00:20:15.692 { 00:20:15.692 "method": "accel_set_options", 00:20:15.692 "params": { 00:20:15.692 "small_cache_size": 128, 00:20:15.692 "large_cache_size": 16, 00:20:15.692 "task_count": 2048, 00:20:15.692 "sequence_count": 2048, 00:20:15.692 "buf_count": 2048 00:20:15.692 } 00:20:15.692 } 00:20:15.692 ] 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "subsystem": "bdev", 00:20:15.692 "config": [ 00:20:15.692 { 00:20:15.692 "method": "bdev_set_options", 00:20:15.692 "params": { 00:20:15.692 "bdev_io_pool_size": 65535, 00:20:15.692 "bdev_io_cache_size": 256, 00:20:15.692 "bdev_auto_examine": true, 00:20:15.692 "iobuf_small_cache_size": 128, 00:20:15.692 "iobuf_large_cache_size": 16 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_raid_set_options", 00:20:15.692 "params": { 00:20:15.692 "process_window_size_kb": 1024 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_iscsi_set_options", 00:20:15.692 "params": { 00:20:15.692 "timeout_sec": 30 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_nvme_set_options", 00:20:15.692 "params": { 00:20:15.692 "action_on_timeout": "none", 00:20:15.692 "timeout_us": 0, 00:20:15.692 "timeout_admin_us": 0, 00:20:15.692 "keep_alive_timeout_ms": 10000, 00:20:15.692 "arbitration_burst": 0, 00:20:15.692 "low_priority_weight": 0, 00:20:15.692 "medium_priority_weight": 0, 00:20:15.692 "high_priority_weight": 0, 00:20:15.692 
"nvme_adminq_poll_period_us": 10000, 00:20:15.692 "nvme_ioq_poll_period_us": 0, 00:20:15.692 "io_queue_requests": 512, 00:20:15.692 "delay_cmd_submit": true, 00:20:15.692 "transport_retry_count": 4, 00:20:15.692 "bdev_retry_count": 3, 00:20:15.692 "transport_ack_timeout": 0, 00:20:15.692 "ctrlr_loss_timeout_sec": 0, 00:20:15.692 "reconnect_delay_sec": 0, 00:20:15.692 "fast_io_fail_timeout_sec": 0, 00:20:15.692 "disable_auto_failback": false, 00:20:15.692 "generate_uuids": false, 00:20:15.692 "transport_tos": 0, 00:20:15.692 "nvme_error_stat": false, 00:20:15.692 "rdma_srq_size": 0, 00:20:15.692 "io_path_stat": false, 00:20:15.692 "allow_accel_sequence": false, 00:20:15.692 "rdma_max_cq_size": 0, 00:20:15.692 "rdma_cm_event_timeout_ms": 0, 00:20:15.692 "dhchap_digests": [ 00:20:15.692 "sha256", 00:20:15.692 "sha384", 00:20:15.692 "sha512" 00:20:15.692 ], 00:20:15.692 "dhchap_dhgroups": [ 00:20:15.692 "null", 00:20:15.692 "ffdhe2048", 00:20:15.692 "ffdhe3072", 00:20:15.692 "ffdhe4096", 00:20:15.692 "ffdhe6144", 00:20:15.692 "ffdhe8192" 00:20:15.692 ] 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_nvme_attach_controller", 00:20:15.692 "params": { 00:20:15.692 "name": "TLSTEST", 00:20:15.692 "trtype": "TCP", 00:20:15.692 "adrfam": "IPv4", 00:20:15.692 "traddr": "10.0.0.2", 00:20:15.692 "trsvcid": "4420", 00:20:15.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.692 "prchk_reftag": false, 00:20:15.692 "prchk_guard": false, 00:20:15.692 "ctrlr_loss_timeout_sec": 0, 00:20:15.692 "reconnect_delay_sec": 0, 00:20:15.692 "fast_io_fail_timeout_sec": 0, 00:20:15.692 "psk": "/tmp/tmp.m1FmyOkO2m", 00:20:15.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.692 "hdgst": false, 00:20:15.692 "ddgst": false 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_nvme_set_hotplug", 00:20:15.692 "params": { 00:20:15.692 "period_us": 100000, 00:20:15.692 "enable": false 00:20:15.692 } 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "method": "bdev_wait_for_examine" 00:20:15.692 } 00:20:15.692 ] 00:20:15.692 }, 00:20:15.692 { 00:20:15.692 "subsystem": "nbd", 00:20:15.692 "config": [] 00:20:15.692 } 00:20:15.692 ] 00:20:15.692 }' 00:20:15.692 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.692 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.692 [2024-07-15 16:01:42.433705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:15.692 [2024-07-15 16:01:42.433787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181934 ] 00:20:15.692 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.692 [2024-07-15 16:01:42.494027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.692 [2024-07-15 16:01:42.600740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.949 [2024-07-15 16:01:42.770175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.949 [2024-07-15 16:01:42.770325] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:16.512 16:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.512 16:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:16.512 16:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.769 Running I/O for 10 seconds... 00:20:26.730 00:20:26.730 Latency(us) 00:20:26.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.730 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.730 Verification LBA range: start 0x0 length 0x2000 00:20:26.730 TLSTESTn1 : 10.05 2465.85 9.63 0.00 0.00 51773.88 10777.03 78060.66 00:20:26.731 =================================================================================================================== 00:20:26.731 Total : 2465.85 9.63 0.00 0.00 51773.88 10777.03 78060.66 00:20:26.731 0 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1181934 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181934 ']' 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181934 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.731 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181934 00:20:26.988 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:26.988 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:26.988 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181934' 00:20:26.988 killing process with pid 1181934 00:20:26.988 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181934 00:20:26.988 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.988 00:20:26.988 Latency(us) 00:20:26.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.988 =================================================================================================================== 00:20:26.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.988 [2024-07-15 16:01:53.673091] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.988 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181934 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1181783 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181783 ']' 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181783 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181783 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181783' 00:20:27.246 killing process with pid 1181783 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181783 00:20:27.246 [2024-07-15 16:01:53.969278] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:27.246 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181783 00:20:27.504 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1183263 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1183263 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1183263 ']' 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.505 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.505 [2024-07-15 16:01:54.291734] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
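The teardown above runs killprocess twice, first for bdevperf (1181934), then for the target (1181783). Judging from the autotest_common.sh line numbers in the trace, the helper is roughly the following; the sudo branch is an assumption, the real helper handles it more carefully:

  # Sketch of killprocess as traced at autotest_common.sh@948-@972.
  killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # must still be running
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for an SPDK app
      [ "$name" = sudo ] && return 1               # assumption: real helper resolves sudo's child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }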
00:20:27.505 [2024-07-15 16:01:54.291812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.505 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.505 [2024-07-15 16:01:54.368322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.764 [2024-07-15 16:01:54.500517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.764 [2024-07-15 16:01:54.500575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.764 [2024-07-15 16:01:54.500617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.764 [2024-07-15 16:01:54.500650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.764 [2024-07-15 16:01:54.500670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.764 [2024-07-15 16:01:54.500712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.m1FmyOkO2m 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m1FmyOkO2m 00:20:27.764 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.021 [2024-07-15 16:01:54.860183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.021 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.279 16:01:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.537 [2024-07-15 16:01:55.341485] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.537 [2024-07-15 16:01:55.341715] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.537 16:01:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:28.795 malloc0 00:20:28.795 16:01:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.053 16:01:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.m1FmyOkO2m 00:20:29.311 [2024-07-15 16:01:56.091450] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1183542 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1183542 /var/tmp/bdevperf.sock 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1183542 ']' 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.311 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.311 [2024-07-15 16:01:56.155617] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:29.311 [2024-07-15 16:01:56.155687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183542 ] 00:20:29.311 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.311 [2024-07-15 16:01:56.213884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.570 [2024-07-15 16:01:56.324405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.570 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.570 16:01:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:29.570 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m1FmyOkO2m 00:20:29.828 16:01:56 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:30.085 [2024-07-15 16:01:56.949454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.343 nvme0n1 00:20:30.343 16:01:57 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.343 Running I/O for 1 seconds... 
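At target/tls.sh@227-228 the initiator side switches to the keyring API: the PSK file is registered once as a named key, and the controller is attached by key name rather than by path, which is the non-deprecated replacement for the form warned about earlier. The two RPCs, as traced:

  # Register the PSK file under a key name, then attach referencing that name.
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC keyring_file_add_key key0 /tmp/tmp.m1FmyOkO2m
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1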
00:20:31.275 00:20:31.275 Latency(us) 00:20:31.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.275 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.275 Verification LBA range: start 0x0 length 0x2000 00:20:31.275 nvme0n1 : 1.05 2321.22 9.07 0.00 0.00 54021.52 6359.42 93595.12 00:20:31.275 =================================================================================================================== 00:20:31.275 Total : 2321.22 9.07 0.00 0.00 54021.52 6359.42 93595.12 00:20:31.275 0 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1183542 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1183542 ']' 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1183542 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1183542 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1183542' 00:20:31.533 killing process with pid 1183542 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1183542 00:20:31.533 Received shutdown signal, test time was about 1.000000 seconds 00:20:31.533 00:20:31.533 Latency(us) 00:20:31.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.533 =================================================================================================================== 00:20:31.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.533 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1183542 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1183263 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1183263 ']' 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1183263 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1183263 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1183263' 00:20:31.792 killing process with pid 1183263 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1183263 00:20:31.792 [2024-07-15 16:01:58.553361] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:31.792 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1183263 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.051 
16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1183829 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1183829 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1183829 ']' 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.051 16:01:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.051 [2024-07-15 16:01:58.893047] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:32.051 [2024-07-15 16:01:58.893131] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.051 [2024-07-15 16:01:58.960271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.310 [2024-07-15 16:01:59.077989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.310 [2024-07-15 16:01:59.078042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.310 [2024-07-15 16:01:59.078070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.310 [2024-07-15 16:01:59.078081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.310 [2024-07-15 16:01:59.078091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
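The target-side TLS setup runs twice in this section: once through setup_nvmf_tgt (target/tls.sh@219, traced above with explicit rpc.py calls) and once through rpc_cmd in the block that follows. Condensed into plain rpc.py calls, using only the flags visible in this trace:

  # Build a TLS-capable NVMe/TCP target: transport, subsystem, namespace, TLS listener, PSK host.
  RPC="$SPDK_DIR/scripts/rpc.py"                   # default socket /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o             # -o apparently maps to "c2h_success": false in the dumps
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.m1FmyOkO2m                    # path form; triggers the PSK-path deprecation warning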
00:20:32.310 [2024-07-15 16:01:59.078126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.310 16:01:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.311 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:32.311 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.311 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.311 [2024-07-15 16:01:59.227722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.569 malloc0 00:20:32.569 [2024-07-15 16:01:59.260239] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.569 [2024-07-15 16:01:59.260494] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1183967 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1183967 /var/tmp/bdevperf.sock 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1183967 ']' 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.569 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.570 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.570 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.570 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.570 [2024-07-15 16:01:59.332117] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
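The remainder of the section verifies that this setup survives a configuration round trip: target/tls.sh@265 and @266 call save_config against the target and against bdevperf, producing the tgtcfg and bperfcfg JSON blobs dumped below, which the harness then replays on restart via -c /dev/fd/62. Capturing them by hand:

  # Snapshot both running configurations for later replay with "-c <file>".
  "$SPDK_DIR/scripts/rpc.py" save_config > tgtcfg.json
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config > bperfcfg.json
  # Both dumps record the PSK as a keyring entry (keyring_file_add_key / key0)
  # rather than a raw path, so replaying them stays on the non-deprecated API.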
00:20:32.570 [2024-07-15 16:01:59.332202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183967 ] 00:20:32.570 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.570 [2024-07-15 16:01:59.390445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.570 [2024-07-15 16:01:59.498553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.830 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.830 16:01:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:32.830 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m1FmyOkO2m 00:20:33.088 16:01:59 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:33.347 [2024-07-15 16:02:00.112409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.347 nvme0n1 00:20:33.347 16:02:00 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.604 Running I/O for 1 seconds... 00:20:34.538 00:20:34.538 Latency(us) 00:20:34.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.538 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:34.538 Verification LBA range: start 0x0 length 0x2000 00:20:34.538 nvme0n1 : 1.05 2413.00 9.43 0.00 0.00 51993.00 7524.50 76895.57 00:20:34.538 =================================================================================================================== 00:20:34.538 Total : 2413.00 9.43 0.00 0.00 51993.00 7524.50 76895.57 00:20:34.538 0 00:20:34.538 16:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:34.538 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.538 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.796 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.796 16:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:34.796 "subsystems": [ 00:20:34.796 { 00:20:34.796 "subsystem": "keyring", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "keyring_file_add_key", 00:20:34.796 "params": { 00:20:34.796 "name": "key0", 00:20:34.796 "path": "/tmp/tmp.m1FmyOkO2m" 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "iobuf", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "iobuf_set_options", 00:20:34.796 "params": { 00:20:34.796 "small_pool_count": 8192, 00:20:34.796 "large_pool_count": 1024, 00:20:34.796 "small_bufsize": 8192, 00:20:34.796 "large_bufsize": 135168 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "sock", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "sock_set_default_impl", 00:20:34.796 "params": { 00:20:34.796 "impl_name": "posix" 00:20:34.796 } 
00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "sock_impl_set_options", 00:20:34.796 "params": { 00:20:34.796 "impl_name": "ssl", 00:20:34.796 "recv_buf_size": 4096, 00:20:34.796 "send_buf_size": 4096, 00:20:34.796 "enable_recv_pipe": true, 00:20:34.796 "enable_quickack": false, 00:20:34.796 "enable_placement_id": 0, 00:20:34.796 "enable_zerocopy_send_server": true, 00:20:34.796 "enable_zerocopy_send_client": false, 00:20:34.796 "zerocopy_threshold": 0, 00:20:34.796 "tls_version": 0, 00:20:34.796 "enable_ktls": false 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "sock_impl_set_options", 00:20:34.796 "params": { 00:20:34.796 "impl_name": "posix", 00:20:34.796 "recv_buf_size": 2097152, 00:20:34.796 "send_buf_size": 2097152, 00:20:34.796 "enable_recv_pipe": true, 00:20:34.796 "enable_quickack": false, 00:20:34.796 "enable_placement_id": 0, 00:20:34.796 "enable_zerocopy_send_server": true, 00:20:34.796 "enable_zerocopy_send_client": false, 00:20:34.796 "zerocopy_threshold": 0, 00:20:34.796 "tls_version": 0, 00:20:34.796 "enable_ktls": false 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "vmd", 00:20:34.796 "config": [] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "accel", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "accel_set_options", 00:20:34.796 "params": { 00:20:34.796 "small_cache_size": 128, 00:20:34.796 "large_cache_size": 16, 00:20:34.796 "task_count": 2048, 00:20:34.796 "sequence_count": 2048, 00:20:34.796 "buf_count": 2048 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "bdev", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "bdev_set_options", 00:20:34.796 "params": { 00:20:34.796 "bdev_io_pool_size": 65535, 00:20:34.796 "bdev_io_cache_size": 256, 00:20:34.796 "bdev_auto_examine": true, 00:20:34.796 "iobuf_small_cache_size": 128, 00:20:34.796 "iobuf_large_cache_size": 16 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_raid_set_options", 00:20:34.796 "params": { 00:20:34.796 "process_window_size_kb": 1024 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_iscsi_set_options", 00:20:34.796 "params": { 00:20:34.796 "timeout_sec": 30 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_nvme_set_options", 00:20:34.796 "params": { 00:20:34.796 "action_on_timeout": "none", 00:20:34.796 "timeout_us": 0, 00:20:34.796 "timeout_admin_us": 0, 00:20:34.796 "keep_alive_timeout_ms": 10000, 00:20:34.796 "arbitration_burst": 0, 00:20:34.796 "low_priority_weight": 0, 00:20:34.796 "medium_priority_weight": 0, 00:20:34.796 "high_priority_weight": 0, 00:20:34.796 "nvme_adminq_poll_period_us": 10000, 00:20:34.796 "nvme_ioq_poll_period_us": 0, 00:20:34.796 "io_queue_requests": 0, 00:20:34.796 "delay_cmd_submit": true, 00:20:34.796 "transport_retry_count": 4, 00:20:34.796 "bdev_retry_count": 3, 00:20:34.796 "transport_ack_timeout": 0, 00:20:34.796 "ctrlr_loss_timeout_sec": 0, 00:20:34.796 "reconnect_delay_sec": 0, 00:20:34.796 "fast_io_fail_timeout_sec": 0, 00:20:34.796 "disable_auto_failback": false, 00:20:34.796 "generate_uuids": false, 00:20:34.796 "transport_tos": 0, 00:20:34.796 "nvme_error_stat": false, 00:20:34.796 "rdma_srq_size": 0, 00:20:34.796 "io_path_stat": false, 00:20:34.796 "allow_accel_sequence": false, 00:20:34.796 "rdma_max_cq_size": 0, 00:20:34.796 "rdma_cm_event_timeout_ms": 0, 00:20:34.796 "dhchap_digests": [ 00:20:34.796 "sha256", 
00:20:34.796 "sha384", 00:20:34.796 "sha512" 00:20:34.796 ], 00:20:34.796 "dhchap_dhgroups": [ 00:20:34.796 "null", 00:20:34.796 "ffdhe2048", 00:20:34.796 "ffdhe3072", 00:20:34.796 "ffdhe4096", 00:20:34.796 "ffdhe6144", 00:20:34.796 "ffdhe8192" 00:20:34.796 ] 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_nvme_set_hotplug", 00:20:34.796 "params": { 00:20:34.796 "period_us": 100000, 00:20:34.796 "enable": false 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_malloc_create", 00:20:34.796 "params": { 00:20:34.796 "name": "malloc0", 00:20:34.796 "num_blocks": 8192, 00:20:34.796 "block_size": 4096, 00:20:34.796 "physical_block_size": 4096, 00:20:34.796 "uuid": "e53bb4ae-e165-42d0-902c-d5b536d82ebf", 00:20:34.796 "optimal_io_boundary": 0 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "bdev_wait_for_examine" 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "nbd", 00:20:34.796 "config": [] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "scheduler", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "framework_set_scheduler", 00:20:34.796 "params": { 00:20:34.796 "name": "static" 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "subsystem": "nvmf", 00:20:34.796 "config": [ 00:20:34.796 { 00:20:34.796 "method": "nvmf_set_config", 00:20:34.796 "params": { 00:20:34.796 "discovery_filter": "match_any", 00:20:34.796 "admin_cmd_passthru": { 00:20:34.796 "identify_ctrlr": false 00:20:34.796 } 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_set_max_subsystems", 00:20:34.796 "params": { 00:20:34.796 "max_subsystems": 1024 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_set_crdt", 00:20:34.796 "params": { 00:20:34.796 "crdt1": 0, 00:20:34.796 "crdt2": 0, 00:20:34.796 "crdt3": 0 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_create_transport", 00:20:34.796 "params": { 00:20:34.796 "trtype": "TCP", 00:20:34.796 "max_queue_depth": 128, 00:20:34.796 "max_io_qpairs_per_ctrlr": 127, 00:20:34.796 "in_capsule_data_size": 4096, 00:20:34.796 "max_io_size": 131072, 00:20:34.796 "io_unit_size": 131072, 00:20:34.796 "max_aq_depth": 128, 00:20:34.796 "num_shared_buffers": 511, 00:20:34.796 "buf_cache_size": 4294967295, 00:20:34.796 "dif_insert_or_strip": false, 00:20:34.796 "zcopy": false, 00:20:34.796 "c2h_success": false, 00:20:34.796 "sock_priority": 0, 00:20:34.796 "abort_timeout_sec": 1, 00:20:34.796 "ack_timeout": 0, 00:20:34.796 "data_wr_pool_size": 0 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_create_subsystem", 00:20:34.796 "params": { 00:20:34.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.796 "allow_any_host": false, 00:20:34.796 "serial_number": "00000000000000000000", 00:20:34.796 "model_number": "SPDK bdev Controller", 00:20:34.796 "max_namespaces": 32, 00:20:34.796 "min_cntlid": 1, 00:20:34.796 "max_cntlid": 65519, 00:20:34.796 "ana_reporting": false 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_subsystem_add_host", 00:20:34.796 "params": { 00:20:34.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.796 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.796 "psk": "key0" 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_subsystem_add_ns", 00:20:34.796 "params": { 00:20:34.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.796 "namespace": { 00:20:34.796 "nsid": 1, 
00:20:34.796 "bdev_name": "malloc0", 00:20:34.796 "nguid": "E53BB4AEE16542D0902CD5B536D82EBF", 00:20:34.796 "uuid": "e53bb4ae-e165-42d0-902c-d5b536d82ebf", 00:20:34.796 "no_auto_visible": false 00:20:34.796 } 00:20:34.796 } 00:20:34.796 }, 00:20:34.796 { 00:20:34.796 "method": "nvmf_subsystem_add_listener", 00:20:34.796 "params": { 00:20:34.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.796 "listen_address": { 00:20:34.796 "trtype": "TCP", 00:20:34.796 "adrfam": "IPv4", 00:20:34.796 "traddr": "10.0.0.2", 00:20:34.796 "trsvcid": "4420" 00:20:34.796 }, 00:20:34.796 "secure_channel": false, 00:20:34.796 "sock_impl": "ssl" 00:20:34.796 } 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 } 00:20:34.796 ] 00:20:34.796 }' 00:20:34.797 16:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:35.055 "subsystems": [ 00:20:35.055 { 00:20:35.055 "subsystem": "keyring", 00:20:35.055 "config": [ 00:20:35.055 { 00:20:35.055 "method": "keyring_file_add_key", 00:20:35.055 "params": { 00:20:35.055 "name": "key0", 00:20:35.055 "path": "/tmp/tmp.m1FmyOkO2m" 00:20:35.055 } 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "iobuf", 00:20:35.055 "config": [ 00:20:35.055 { 00:20:35.055 "method": "iobuf_set_options", 00:20:35.055 "params": { 00:20:35.055 "small_pool_count": 8192, 00:20:35.055 "large_pool_count": 1024, 00:20:35.055 "small_bufsize": 8192, 00:20:35.055 "large_bufsize": 135168 00:20:35.055 } 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "sock", 00:20:35.055 "config": [ 00:20:35.055 { 00:20:35.055 "method": "sock_set_default_impl", 00:20:35.055 "params": { 00:20:35.055 "impl_name": "posix" 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "sock_impl_set_options", 00:20:35.055 "params": { 00:20:35.055 "impl_name": "ssl", 00:20:35.055 "recv_buf_size": 4096, 00:20:35.055 "send_buf_size": 4096, 00:20:35.055 "enable_recv_pipe": true, 00:20:35.055 "enable_quickack": false, 00:20:35.055 "enable_placement_id": 0, 00:20:35.055 "enable_zerocopy_send_server": true, 00:20:35.055 "enable_zerocopy_send_client": false, 00:20:35.055 "zerocopy_threshold": 0, 00:20:35.055 "tls_version": 0, 00:20:35.055 "enable_ktls": false 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "sock_impl_set_options", 00:20:35.055 "params": { 00:20:35.055 "impl_name": "posix", 00:20:35.055 "recv_buf_size": 2097152, 00:20:35.055 "send_buf_size": 2097152, 00:20:35.055 "enable_recv_pipe": true, 00:20:35.055 "enable_quickack": false, 00:20:35.055 "enable_placement_id": 0, 00:20:35.055 "enable_zerocopy_send_server": true, 00:20:35.055 "enable_zerocopy_send_client": false, 00:20:35.055 "zerocopy_threshold": 0, 00:20:35.055 "tls_version": 0, 00:20:35.055 "enable_ktls": false 00:20:35.055 } 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "vmd", 00:20:35.055 "config": [] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "accel", 00:20:35.055 "config": [ 00:20:35.055 { 00:20:35.055 "method": "accel_set_options", 00:20:35.055 "params": { 00:20:35.055 "small_cache_size": 128, 00:20:35.055 "large_cache_size": 16, 00:20:35.055 "task_count": 2048, 00:20:35.055 "sequence_count": 2048, 00:20:35.055 "buf_count": 2048 00:20:35.055 } 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "bdev", 
00:20:35.055 "config": [ 00:20:35.055 { 00:20:35.055 "method": "bdev_set_options", 00:20:35.055 "params": { 00:20:35.055 "bdev_io_pool_size": 65535, 00:20:35.055 "bdev_io_cache_size": 256, 00:20:35.055 "bdev_auto_examine": true, 00:20:35.055 "iobuf_small_cache_size": 128, 00:20:35.055 "iobuf_large_cache_size": 16 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_raid_set_options", 00:20:35.055 "params": { 00:20:35.055 "process_window_size_kb": 1024 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_iscsi_set_options", 00:20:35.055 "params": { 00:20:35.055 "timeout_sec": 30 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_nvme_set_options", 00:20:35.055 "params": { 00:20:35.055 "action_on_timeout": "none", 00:20:35.055 "timeout_us": 0, 00:20:35.055 "timeout_admin_us": 0, 00:20:35.055 "keep_alive_timeout_ms": 10000, 00:20:35.055 "arbitration_burst": 0, 00:20:35.055 "low_priority_weight": 0, 00:20:35.055 "medium_priority_weight": 0, 00:20:35.055 "high_priority_weight": 0, 00:20:35.055 "nvme_adminq_poll_period_us": 10000, 00:20:35.055 "nvme_ioq_poll_period_us": 0, 00:20:35.055 "io_queue_requests": 512, 00:20:35.055 "delay_cmd_submit": true, 00:20:35.055 "transport_retry_count": 4, 00:20:35.055 "bdev_retry_count": 3, 00:20:35.055 "transport_ack_timeout": 0, 00:20:35.055 "ctrlr_loss_timeout_sec": 0, 00:20:35.055 "reconnect_delay_sec": 0, 00:20:35.055 "fast_io_fail_timeout_sec": 0, 00:20:35.055 "disable_auto_failback": false, 00:20:35.055 "generate_uuids": false, 00:20:35.055 "transport_tos": 0, 00:20:35.055 "nvme_error_stat": false, 00:20:35.055 "rdma_srq_size": 0, 00:20:35.055 "io_path_stat": false, 00:20:35.055 "allow_accel_sequence": false, 00:20:35.055 "rdma_max_cq_size": 0, 00:20:35.055 "rdma_cm_event_timeout_ms": 0, 00:20:35.055 "dhchap_digests": [ 00:20:35.055 "sha256", 00:20:35.055 "sha384", 00:20:35.055 "sha512" 00:20:35.055 ], 00:20:35.055 "dhchap_dhgroups": [ 00:20:35.055 "null", 00:20:35.055 "ffdhe2048", 00:20:35.055 "ffdhe3072", 00:20:35.055 "ffdhe4096", 00:20:35.055 "ffdhe6144", 00:20:35.055 "ffdhe8192" 00:20:35.055 ] 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_nvme_attach_controller", 00:20:35.055 "params": { 00:20:35.055 "name": "nvme0", 00:20:35.055 "trtype": "TCP", 00:20:35.055 "adrfam": "IPv4", 00:20:35.055 "traddr": "10.0.0.2", 00:20:35.055 "trsvcid": "4420", 00:20:35.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.055 "prchk_reftag": false, 00:20:35.055 "prchk_guard": false, 00:20:35.055 "ctrlr_loss_timeout_sec": 0, 00:20:35.055 "reconnect_delay_sec": 0, 00:20:35.055 "fast_io_fail_timeout_sec": 0, 00:20:35.055 "psk": "key0", 00:20:35.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.055 "hdgst": false, 00:20:35.055 "ddgst": false 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_nvme_set_hotplug", 00:20:35.055 "params": { 00:20:35.055 "period_us": 100000, 00:20:35.055 "enable": false 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_enable_histogram", 00:20:35.055 "params": { 00:20:35.055 "name": "nvme0n1", 00:20:35.055 "enable": true 00:20:35.055 } 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "method": "bdev_wait_for_examine" 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }, 00:20:35.055 { 00:20:35.055 "subsystem": "nbd", 00:20:35.055 "config": [] 00:20:35.055 } 00:20:35.055 ] 00:20:35.055 }' 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1183967 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 1183967 ']' 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1183967 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1183967 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1183967' 00:20:35.055 killing process with pid 1183967 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1183967 00:20:35.055 Received shutdown signal, test time was about 1.000000 seconds 00:20:35.055 00:20:35.055 Latency(us) 00:20:35.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.055 =================================================================================================================== 00:20:35.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.055 16:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1183967 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1183829 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1183829 ']' 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1183829 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1183829 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1183829' 00:20:35.313 killing process with pid 1183829 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1183829 00:20:35.313 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1183829 00:20:35.571 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:35.571 16:02:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.571 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:35.571 "subsystems": [ 00:20:35.571 { 00:20:35.571 "subsystem": "keyring", 00:20:35.571 "config": [ 00:20:35.571 { 00:20:35.571 "method": "keyring_file_add_key", 00:20:35.571 "params": { 00:20:35.571 "name": "key0", 00:20:35.571 "path": "/tmp/tmp.m1FmyOkO2m" 00:20:35.571 } 00:20:35.571 } 00:20:35.571 ] 00:20:35.571 }, 00:20:35.571 { 00:20:35.571 "subsystem": "iobuf", 00:20:35.571 "config": [ 00:20:35.571 { 00:20:35.571 "method": "iobuf_set_options", 00:20:35.571 "params": { 00:20:35.571 "small_pool_count": 8192, 00:20:35.571 "large_pool_count": 1024, 00:20:35.571 "small_bufsize": 8192, 00:20:35.571 "large_bufsize": 135168 00:20:35.571 } 00:20:35.571 } 00:20:35.571 ] 00:20:35.571 }, 00:20:35.571 { 00:20:35.571 "subsystem": "sock", 00:20:35.571 "config": [ 00:20:35.571 { 
00:20:35.571 "method": "sock_set_default_impl", 00:20:35.571 "params": { 00:20:35.571 "impl_name": "posix" 00:20:35.571 } 00:20:35.571 }, 00:20:35.571 { 00:20:35.571 "method": "sock_impl_set_options", 00:20:35.571 "params": { 00:20:35.571 "impl_name": "ssl", 00:20:35.572 "recv_buf_size": 4096, 00:20:35.572 "send_buf_size": 4096, 00:20:35.572 "enable_recv_pipe": true, 00:20:35.572 "enable_quickack": false, 00:20:35.572 "enable_placement_id": 0, 00:20:35.572 "enable_zerocopy_send_server": true, 00:20:35.572 "enable_zerocopy_send_client": false, 00:20:35.572 "zerocopy_threshold": 0, 00:20:35.572 "tls_version": 0, 00:20:35.572 "enable_ktls": false 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "sock_impl_set_options", 00:20:35.572 "params": { 00:20:35.572 "impl_name": "posix", 00:20:35.572 "recv_buf_size": 2097152, 00:20:35.572 "send_buf_size": 2097152, 00:20:35.572 "enable_recv_pipe": true, 00:20:35.572 "enable_quickack": false, 00:20:35.572 "enable_placement_id": 0, 00:20:35.572 "enable_zerocopy_send_server": true, 00:20:35.572 "enable_zerocopy_send_client": false, 00:20:35.572 "zerocopy_threshold": 0, 00:20:35.572 "tls_version": 0, 00:20:35.572 "enable_ktls": false 00:20:35.572 } 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "vmd", 00:20:35.572 "config": [] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "accel", 00:20:35.572 "config": [ 00:20:35.572 { 00:20:35.572 "method": "accel_set_options", 00:20:35.572 "params": { 00:20:35.572 "small_cache_size": 128, 00:20:35.572 "large_cache_size": 16, 00:20:35.572 "task_count": 2048, 00:20:35.572 "sequence_count": 2048, 00:20:35.572 "buf_count": 2048 00:20:35.572 } 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "bdev", 00:20:35.572 "config": [ 00:20:35.572 { 00:20:35.572 "method": "bdev_set_options", 00:20:35.572 "params": { 00:20:35.572 "bdev_io_pool_size": 65535, 00:20:35.572 "bdev_io_cache_size": 256, 00:20:35.572 "bdev_auto_examine": true, 00:20:35.572 "iobuf_small_cache_size": 128, 00:20:35.572 "iobuf_large_cache_size": 16 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_raid_set_options", 00:20:35.572 "params": { 00:20:35.572 "process_window_size_kb": 1024 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_iscsi_set_options", 00:20:35.572 "params": { 00:20:35.572 "timeout_sec": 30 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_nvme_set_options", 00:20:35.572 "params": { 00:20:35.572 "action_on_timeout": "none", 00:20:35.572 "timeout_us": 0, 00:20:35.572 "timeout_admin_us": 0, 00:20:35.572 "keep_alive_timeout_ms": 10000, 00:20:35.572 "arbitration_burst": 0, 00:20:35.572 "low_priority_weight": 0, 00:20:35.572 "medium_priority_weight": 0, 00:20:35.572 "high_priority_weight": 0, 00:20:35.572 "nvme_adminq_poll_period_us": 10000, 00:20:35.572 "nvme_ioq_poll_period_us": 0, 00:20:35.572 "io_queue_requests": 0, 00:20:35.572 "delay_cmd_submit": true, 00:20:35.572 "transport_retry_count": 4, 00:20:35.572 "bdev_retry_count": 3, 00:20:35.572 "transport_ack_timeout": 0, 00:20:35.572 "ctrlr_loss_timeout_sec": 0, 00:20:35.572 "reconnect_delay_sec": 0, 00:20:35.572 "fast_io_fail_timeout_sec": 0, 00:20:35.572 "disable_auto_failback": false, 00:20:35.572 "generate_uuids": false, 00:20:35.572 "transport_tos": 0, 00:20:35.572 "nvme_error_stat": false, 00:20:35.572 "rdma_srq_size": 0, 00:20:35.572 "io_path_stat": false, 00:20:35.572 "allow_accel_sequence": false, 00:20:35.572 
"rdma_max_cq_size": 0, 00:20:35.572 "rdma_cm_event_timeout_ms": 0, 00:20:35.572 "dhchap_digests": [ 00:20:35.572 "sha256", 00:20:35.572 "sha384", 00:20:35.572 "sha512" 00:20:35.572 ], 00:20:35.572 "dhchap_dhgroups": [ 00:20:35.572 "null", 00:20:35.572 "ffdhe2048", 00:20:35.572 "ffdhe3072", 00:20:35.572 "ffdhe4096", 00:20:35.572 "ffdhe6144", 00:20:35.572 "ffdhe8192" 00:20:35.572 ] 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_nvme_set_hotplug", 00:20:35.572 "params": { 00:20:35.572 "period_us": 100000, 00:20:35.572 "enable": false 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_malloc_create", 00:20:35.572 "params": { 00:20:35.572 "name": "malloc0", 00:20:35.572 "num_blocks": 8192, 00:20:35.572 "block_size": 4096, 00:20:35.572 "physical_block_size": 4096, 00:20:35.572 "uuid": "e53bb4ae-e165-42d0-902c-d5b536d82ebf", 00:20:35.572 "optimal_io_boundary": 0 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "bdev_wait_for_examine" 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "nbd", 00:20:35.572 "config": [] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "scheduler", 00:20:35.572 "config": [ 00:20:35.572 { 00:20:35.572 "method": "framework_set_scheduler", 00:20:35.572 "params": { 00:20:35.572 "name": "static" 00:20:35.572 } 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "subsystem": "nvmf", 00:20:35.572 "config": [ 00:20:35.572 { 00:20:35.572 "method": "nvmf_set_config", 00:20:35.572 "params": { 00:20:35.572 "discovery_filter": "match_any", 00:20:35.572 "admin_cmd_passthru": { 00:20:35.572 "identify_ctrlr": false 00:20:35.572 } 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_set_max_subsystems", 00:20:35.572 "params": { 00:20:35.572 "max_subsystems": 1024 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_set_crdt", 00:20:35.572 "params": { 00:20:35.572 "crdt1": 0, 00:20:35.572 "crdt2": 0, 00:20:35.572 "crdt3": 0 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_create_transport", 00:20:35.572 "params": { 00:20:35.572 "trtype": "TCP", 00:20:35.572 "max_queue_depth": 128, 00:20:35.572 "max_io_qpairs_per_ctrlr": 127, 00:20:35.572 "in_capsule_data_size": 4096, 00:20:35.572 "max_io_size": 131072, 00:20:35.572 "io_unit_size": 131072, 00:20:35.572 "max_aq_depth": 128, 00:20:35.572 "num_shared_buffers": 511, 00:20:35.572 "buf_cache_size": 4294967295, 00:20:35.572 "dif_insert_or_strip": false, 00:20:35.572 "zcopy": false, 00:20:35.572 "c2h_success": false, 00:20:35.572 "sock_priority": 0, 00:20:35.572 "abort_timeout_sec": 1, 00:20:35.572 "ack_timeout": 0, 00:20:35.572 "data_wr_pool_size": 0 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_create_subsystem", 00:20:35.572 "params": { 00:20:35.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.572 "allow_any_host": false, 00:20:35.572 "serial_number": "00000000000000000000", 00:20:35.572 "model_number": "SPDK bdev Controller", 00:20:35.572 "max_namespaces": 32, 00:20:35.572 "min_cntlid": 1, 00:20:35.572 "max_cntlid": 65519, 00:20:35.572 "ana_reporting": false 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_subsystem_add_host", 00:20:35.572 "params": { 00:20:35.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.572 "host": "nqn.2016-06.io.spdk:host1", 00:20:35.572 "psk": "key0" 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_subsystem_add_ns", 00:20:35.572 
"params": { 00:20:35.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.572 "namespace": { 00:20:35.572 "nsid": 1, 00:20:35.572 "bdev_name": "malloc0", 00:20:35.572 "nguid": "E53BB4AEE16542D0902CD5B536D82EBF", 00:20:35.572 "uuid": "e53bb4ae-e165-42d0-902c-d5b536d82ebf", 00:20:35.572 "no_auto_visible": false 00:20:35.572 } 00:20:35.572 } 00:20:35.572 }, 00:20:35.572 { 00:20:35.572 "method": "nvmf_subsystem_add_listener", 00:20:35.572 "params": { 00:20:35.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.572 "listen_address": { 00:20:35.572 "trtype": "TCP", 00:20:35.572 "adrfam": "IPv4", 00:20:35.572 "traddr": "10.0.0.2", 00:20:35.572 "trsvcid": "4420" 00:20:35.572 }, 00:20:35.572 "secure_channel": false, 00:20:35.572 "sock_impl": "ssl" 00:20:35.572 } 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 } 00:20:35.572 ] 00:20:35.572 }' 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1184351 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1184351 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184351 ']' 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.572 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.831 [2024-07-15 16:02:02.548585] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:35.831 [2024-07-15 16:02:02.548676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.831 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.831 [2024-07-15 16:02:02.612716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.831 [2024-07-15 16:02:02.717371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.831 [2024-07-15 16:02:02.717424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.831 [2024-07-15 16:02:02.717437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.831 [2024-07-15 16:02:02.717448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.831 [2024-07-15 16:02:02.717458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.831 [2024-07-15 16:02:02.717532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.090 [2024-07-15 16:02:02.962262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.090 [2024-07-15 16:02:02.994284] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.090 [2024-07-15 16:02:03.003049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1184410 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1184410 /var/tmp/bdevperf.sock 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184410 ']' 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:36.655 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.656 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:36.656 "subsystems": [ 00:20:36.656 { 00:20:36.656 "subsystem": "keyring", 00:20:36.656 "config": [ 00:20:36.656 { 00:20:36.656 "method": "keyring_file_add_key", 00:20:36.656 "params": { 00:20:36.656 "name": "key0", 00:20:36.656 "path": "/tmp/tmp.m1FmyOkO2m" 00:20:36.656 } 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "iobuf", 00:20:36.656 "config": [ 00:20:36.656 { 00:20:36.656 "method": "iobuf_set_options", 00:20:36.656 "params": { 00:20:36.656 "small_pool_count": 8192, 00:20:36.656 "large_pool_count": 1024, 00:20:36.656 "small_bufsize": 8192, 00:20:36.656 "large_bufsize": 135168 00:20:36.656 } 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "sock", 00:20:36.656 "config": [ 00:20:36.656 { 00:20:36.656 "method": "sock_set_default_impl", 00:20:36.656 "params": { 00:20:36.656 "impl_name": "posix" 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "sock_impl_set_options", 00:20:36.656 "params": { 00:20:36.656 "impl_name": "ssl", 00:20:36.656 "recv_buf_size": 4096, 00:20:36.656 "send_buf_size": 4096, 00:20:36.656 "enable_recv_pipe": true, 00:20:36.656 "enable_quickack": false, 00:20:36.656 "enable_placement_id": 0, 00:20:36.656 "enable_zerocopy_send_server": true, 00:20:36.656 "enable_zerocopy_send_client": false, 00:20:36.656 "zerocopy_threshold": 0, 00:20:36.656 "tls_version": 0, 00:20:36.656 "enable_ktls": false 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "sock_impl_set_options", 00:20:36.656 "params": { 00:20:36.656 "impl_name": "posix", 00:20:36.656 "recv_buf_size": 2097152, 00:20:36.656 "send_buf_size": 2097152, 00:20:36.656 
"enable_recv_pipe": true, 00:20:36.656 "enable_quickack": false, 00:20:36.656 "enable_placement_id": 0, 00:20:36.656 "enable_zerocopy_send_server": true, 00:20:36.656 "enable_zerocopy_send_client": false, 00:20:36.656 "zerocopy_threshold": 0, 00:20:36.656 "tls_version": 0, 00:20:36.656 "enable_ktls": false 00:20:36.656 } 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "vmd", 00:20:36.656 "config": [] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "accel", 00:20:36.656 "config": [ 00:20:36.656 { 00:20:36.656 "method": "accel_set_options", 00:20:36.656 "params": { 00:20:36.656 "small_cache_size": 128, 00:20:36.656 "large_cache_size": 16, 00:20:36.656 "task_count": 2048, 00:20:36.656 "sequence_count": 2048, 00:20:36.656 "buf_count": 2048 00:20:36.656 } 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "bdev", 00:20:36.656 "config": [ 00:20:36.656 { 00:20:36.656 "method": "bdev_set_options", 00:20:36.656 "params": { 00:20:36.656 "bdev_io_pool_size": 65535, 00:20:36.656 "bdev_io_cache_size": 256, 00:20:36.656 "bdev_auto_examine": true, 00:20:36.656 "iobuf_small_cache_size": 128, 00:20:36.656 "iobuf_large_cache_size": 16 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_raid_set_options", 00:20:36.656 "params": { 00:20:36.656 "process_window_size_kb": 1024 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_iscsi_set_options", 00:20:36.656 "params": { 00:20:36.656 "timeout_sec": 30 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_nvme_set_options", 00:20:36.656 "params": { 00:20:36.656 "action_on_timeout": "none", 00:20:36.656 "timeout_us": 0, 00:20:36.656 "timeout_admin_us": 0, 00:20:36.656 "keep_alive_timeout_ms": 10000, 00:20:36.656 "arbitration_burst": 0, 00:20:36.656 "low_priority_weight": 0, 00:20:36.656 "medium_priority_weight": 0, 00:20:36.656 "high_priority_weight": 0, 00:20:36.656 "nvme_adminq_poll_period_us": 10000, 00:20:36.656 "nvme_ioq_poll_period_us": 0, 00:20:36.656 "io_queue_requests": 512, 00:20:36.656 "delay_cmd_submit": true, 00:20:36.656 "transport_retry_count": 4, 00:20:36.656 "bdev_retry_count": 3, 00:20:36.656 "transport_ack_timeout": 0, 00:20:36.656 "ctrlr_loss_timeout_sec": 0, 00:20:36.656 "reconnect_delay_sec": 0, 00:20:36.656 "fast_io_fail_timeout_sec": 0, 00:20:36.656 "disable_auto_failback": false, 00:20:36.656 "generate_uuids": false, 00:20:36.656 "transport_tos": 0, 00:20:36.656 "nvme_error_stat": false, 00:20:36.656 "rdma_srq_size": 0, 00:20:36.656 "io_path_stat": false, 00:20:36.656 "allow_accel_sequence": false, 00:20:36.656 "rdma_max_cq_size": 0, 00:20:36.656 "rdma_cm_event_timeout_ms": 0, 00:20:36.656 "dhchap_digests": [ 00:20:36.656 "sha256", 00:20:36.656 "sha384", 00:20:36.656 "sha512" 00:20:36.656 ], 00:20:36.656 "dhchap_dhgroups": [ 00:20:36.656 "null", 00:20:36.656 "ffdhe2048", 00:20:36.656 "ffdhe3072", 00:20:36.656 "ffdhe4096", 00:20:36.656 "ffdhe6144", 00:20:36.656 "ffdhe8192" 00:20:36.656 ] 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_nvme_attach_controller", 00:20:36.656 "params": { 00:20:36.656 "name": "nvme0", 00:20:36.656 "trtype": "TCP", 00:20:36.656 "adrfam": "IPv4", 00:20:36.656 "traddr": "10.0.0.2", 00:20:36.656 "trsvcid": "4420", 00:20:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.656 "prchk_reftag": false, 00:20:36.656 "prchk_guard": false, 00:20:36.656 "ctrlr_loss_timeout_sec": 0, 00:20:36.656 "reconnect_delay_sec": 0, 00:20:36.656 
"fast_io_fail_timeout_sec": 0, 00:20:36.656 "psk": "key0", 00:20:36.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.656 "hdgst": false, 00:20:36.656 "ddgst": false 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_nvme_set_hotplug", 00:20:36.656 "params": { 00:20:36.656 "period_us": 100000, 00:20:36.656 "enable": false 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_enable_histogram", 00:20:36.656 "params": { 00:20:36.656 "name": "nvme0n1", 00:20:36.656 "enable": true 00:20:36.656 } 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "method": "bdev_wait_for_examine" 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }, 00:20:36.656 { 00:20:36.656 "subsystem": "nbd", 00:20:36.656 "config": [] 00:20:36.656 } 00:20:36.656 ] 00:20:36.656 }' 00:20:36.656 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.656 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.656 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.656 [2024-07-15 16:02:03.552173] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:36.656 [2024-07-15 16:02:03.552261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184410 ] 00:20:36.656 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.914 [2024-07-15 16:02:03.616437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.914 [2024-07-15 16:02:03.734515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.174 [2024-07-15 16:02:03.920268] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.776 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.776 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.776 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:37.776 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:38.081 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.081 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.081 Running I/O for 1 seconds... 
00:20:39.448 00:20:39.448 Latency(us) 00:20:39.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:39.448 Verification LBA range: start 0x0 length 0x2000 00:20:39.449 nvme0n1 : 1.07 1667.85 6.52 0.00 0.00 74669.97 6893.42 112624.83 00:20:39.449 =================================================================================================================== 00:20:39.449 Total : 1667.85 6.52 0.00 0.00 74669.97 6893.42 112624.83 00:20:39.449 0 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:39.449 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:39.449 nvmf_trace.0 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1184410 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184410 ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184410 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184410 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184410' 00:20:39.449 killing process with pid 1184410 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184410 00:20:39.449 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.449 00:20:39.449 Latency(us) 00:20:39.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.449 =================================================================================================================== 00:20:39.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184410 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
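A note on the cleanup above: process_shm tars the SPDK trace shared-memory file (nvmf_trace.0) into the job's output directory before the apps are killed, so the run can be analyzed after the workspace is wiped. A sketch of replaying that snapshot offline, using the tarball path from the tar invocation above and the exact spdk_trace command the application's app_setup_trace notice suggested; run from the spdk checkout (the relative output/ path is an assumption about where the archive lands):

tar -xzf ../output/nvmf_trace.0_shm.tar.gz -C /dev/shm/    # restore /dev/shm/nvmf_trace.0 from the archived tarball
build/bin/spdk_trace -s nvmf -i 0                          # decode the snapshot, per the startup notice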
00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.449 rmmod nvme_tcp 00:20:39.449 rmmod nvme_fabrics 00:20:39.449 rmmod nvme_keyring 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1184351 ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1184351 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184351 ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184351 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.449 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184351 00:20:39.707 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.707 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.707 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184351' 00:20:39.707 killing process with pid 1184351 00:20:39.707 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184351 00:20:39.707 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184351 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.966 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.869 16:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.869 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QvugcHMDQB /tmp/tmp.oHFpGxsIsF /tmp/tmp.m1FmyOkO2m 00:20:41.869 00:20:41.869 real 1m21.764s 00:20:41.869 user 2m9.855s 00:20:41.869 sys 0m28.760s 00:20:41.869 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.869 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 ************************************ 00:20:41.869 END TEST nvmf_tls 00:20:41.869 ************************************ 00:20:41.869 16:02:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:41.869 16:02:08 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:41.869 16:02:08 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:41.869 16:02:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.869 16:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 ************************************ 00:20:41.869 START TEST nvmf_fips 00:20:41.869 ************************************ 00:20:41.869 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:41.869 * Looking for test storage... 00:20:42.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.128 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:42.129 
16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:42.129 Error setting digest 00:20:42.129 00A204BE957F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:42.129 00A204BE957F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.129 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.130 16:02:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.130 16:02:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.031 
16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.031 16:02:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:44.290 00:20:44.290 --- 10.0.0.2 ping statistics --- 00:20:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.290 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:20:44.290 00:20:44.290 --- 10.0.0.1 ping statistics --- 00:20:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.290 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1186773 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1186773 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1186773 ']' 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.290 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.290 [2024-07-15 16:02:11.176705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:44.290 [2024-07-15 16:02:11.176794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.290 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.548 [2024-07-15 16:02:11.240351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.548 [2024-07-15 16:02:11.347607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.548 [2024-07-15 16:02:11.347662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.549 [2024-07-15 16:02:11.347675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.549 [2024-07-15 16:02:11.347687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.549 [2024-07-15 16:02:11.347698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.549 [2024-07-15 16:02:11.347756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.549 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.549 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:44.549 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.549 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.549 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.806 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.807 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.807 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.065 [2024-07-15 16:02:11.768964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.065 [2024-07-15 16:02:11.784950] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.065 [2024-07-15 16:02:11.785202] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.065 [2024-07-15 16:02:11.817503] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:45.065 malloc0 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1186923 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1186923 /var/tmp/bdevperf.sock 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1186923 ']' 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.065 16:02:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.065 [2024-07-15 16:02:11.909630] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:45.065 [2024-07-15 16:02:11.909710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186923 ] 00:20:45.065 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.065 [2024-07-15 16:02:11.965797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.323 [2024-07-15 16:02:12.072871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.256 16:02:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.256 16:02:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:46.256 16:02:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:46.256 [2024-07-15 16:02:13.062670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.256 [2024-07-15 16:02:13.062798] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.256 TLSTESTn1 00:20:46.256 16:02:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.513 Running I/O for 10 seconds... 
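Everything TLS-specific in this test funnels through the PSK file created above: fips.sh@136-139 write the interchange-format key (NVMeTLSkey-1:01:...) to key.txt and chmod it 0600, setup_nvmf_tgt_conf registers it on the target over rpc.py, and the bdevperf attach above presents the same file with --psk. A hand-run sketch of the initiator half, reusing the sockets and NQNs from this run (the target-side nvmf_subsystem_add_host --psk call is an assumption; the log only shows its PSK-path deprecation warning):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=$SPDK/test/nvmf/fips/key.txt
  chmod 0600 "$KEY"    # as done by fips.sh@139
  # attach through bdevperf's private RPC socket, TLS-PSK on the fabric connection
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # then drive I/O the same way the harness does
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests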
00:20:56.487
00:20:56.487 Latency(us)
00:20:56.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.487 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:56.487 Verification LBA range: start 0x0 length 0x2000
00:20:56.487 TLSTESTn1 : 10.04 2542.38 9.93 0.00 0.00 50220.63 7670.14 86216.25
00:20:56.487 ===================================================================================================================
00:20:56.487 Total : 2542.38 9.93 0.00 0.00 50220.63 7670.14 86216.25
00:20:56.487 0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:56.487 nvmf_trace.0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1186923
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1186923 ']'
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1186923
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:56.487 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186923
00:20:56.745 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:56.745 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:56.745 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186923'
00:20:56.745 killing process with pid 1186923
00:20:56.745 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1186923
00:20:56.745 Received shutdown signal, test time was about 10.000000 seconds
00:20:56.745
00:20:56.745 Latency(us)
00:20:56.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.745 ===================================================================================================================
00:20:56.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:56.745 [2024-07-15 16:02:23.438338] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:56.745 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1186923
00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.004 rmmod nvme_tcp 00:20:57.004 rmmod nvme_fabrics 00:20:57.004 rmmod nvme_keyring 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1186773 ']' 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1186773 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1186773 ']' 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1186773 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186773 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186773' 00:20:57.004 killing process with pid 1186773 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1186773 00:20:57.004 [2024-07-15 16:02:23.794206] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.004 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1186773 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.262 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.794 16:02:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.794 16:02:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:59.794 00:20:59.794 real 0m17.373s 00:20:59.794 user 0m22.160s 00:20:59.794 sys 0m6.525s 00:20:59.794 16:02:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.794 16:02:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 ************************************ 00:20:59.794 END TEST nvmf_fips 
00:20:59.794 ************************************ 00:20:59.794 16:02:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:59.794 16:02:26 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:59.794 16:02:26 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:59.794 16:02:26 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:59.794 16:02:26 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:59.794 16:02:26 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.794 16:02:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:01.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:01.172 16:02:28 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:01.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:01.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:01.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:01.172 16:02:28 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:01.172 16:02:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:01.172 16:02:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
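The device walk just traced (gather_supported_nvmf_pci_devs, called from nvmf.sh@73) is pure sysfs: the whitelisted E810 IDs (0x1592/0x159b) are matched on the PCI bus, and each function's kernel netdev name is read out of its net/ subdirectory, which is how the two 0000:0a:00.x ports resolve to cvl_0_0 and cvl_0_1 with no vendor tooling. A condensed sketch of that mapping for one function, using the 0000:0a:00.0 address from this scan:

  pci=0000:0a:00.0
  # a network PCI function lists its netdev name(s) under .../net/
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"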
00:21:01.172 16:02:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.172 ************************************ 00:21:01.172 START TEST nvmf_perf_adq 00:21:01.172 ************************************ 00:21:01.172 16:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:01.432 * Looking for test storage... 00:21:01.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.432 16:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- 
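Sourcing test/nvmf/common.sh (perf_adq.sh@9) also pins the initiator identity for the whole run: common.sh@17-19 call nvme gen-hostnqn once and reuse the UUID portion as NVME_HOSTID, so every later nvme connect carries a stable --hostnqn/--hostid pair. Roughly as below; the exact parameter expansion is an assumption, only the resulting values are visible in the trace above:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # one way to peel the UUID back out
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")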
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:03.337 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:03.904 16:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:06.443 16:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:11.716 16:02:37 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.716 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.717 16:02:37 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:11.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:11.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:21:11.717
00:21:11.717 --- 10.0.0.2 ping statistics ---
00:21:11.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.717 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:11.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:11.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:21:11.717
00:21:11.717 --- 10.0.0.1 ping statistics ---
00:21:11.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.717 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1192789
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1192789
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1192789 ']'
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:11.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:11.717 16:02:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:11.717 [2024-07-15 16:02:37.994853] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
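Note the difference from the FIPS run above: here the target comes up with -m 0xF --wait-for-rpc, so all four reactors exist but subsystem initialization is held back until adq_configure_nvmf_target has pushed its socket options, which is why sock_impl_set_options can still take effect. A condensed sketch of the RPC order the trace below follows (rpc_cmd in the harness is a wrapper over scripts/rpc.py against /var/tmp/spdk.sock; this is not the harness's literal code):

  # target was started with --wait-for-rpc, so initialization is still pending here
  impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
  scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
  scripts/rpc.py framework_start_init    # only now does subsystem init complete
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0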
00:21:11.717 [2024-07-15 16:02:37.994980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.717 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.717 [2024-07-15 16:02:38.060449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.717 [2024-07-15 16:02:38.173029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.717 [2024-07-15 16:02:38.173086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.717 [2024-07-15 16:02:38.173114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.717 [2024-07-15 16:02:38.173125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.717 [2024-07-15 16:02:38.173135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.717 [2024-07-15 16:02:38.173268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.717 [2024-07-15 16:02:38.173331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.717 [2024-07-15 16:02:38.173400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.717 [2024-07-15 16:02:38.173403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 [2024-07-15 16:02:38.392837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 Malloc1 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.717 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.718 [2024-07-15 16:02:38.446128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1192822 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:11.718 16:02:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:11.718 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:13.621 
"tick_rate": 2700000000, 00:21:13.621 "poll_groups": [ 00:21:13.621 { 00:21:13.621 "name": "nvmf_tgt_poll_group_000", 00:21:13.621 "admin_qpairs": 1, 00:21:13.621 "io_qpairs": 1, 00:21:13.621 "current_admin_qpairs": 1, 00:21:13.621 "current_io_qpairs": 1, 00:21:13.621 "pending_bdev_io": 0, 00:21:13.621 "completed_nvme_io": 21277, 00:21:13.621 "transports": [ 00:21:13.621 { 00:21:13.621 "trtype": "TCP" 00:21:13.621 } 00:21:13.621 ] 00:21:13.621 }, 00:21:13.621 { 00:21:13.621 "name": "nvmf_tgt_poll_group_001", 00:21:13.621 "admin_qpairs": 0, 00:21:13.621 "io_qpairs": 1, 00:21:13.621 "current_admin_qpairs": 0, 00:21:13.621 "current_io_qpairs": 1, 00:21:13.621 "pending_bdev_io": 0, 00:21:13.621 "completed_nvme_io": 20713, 00:21:13.621 "transports": [ 00:21:13.621 { 00:21:13.621 "trtype": "TCP" 00:21:13.621 } 00:21:13.621 ] 00:21:13.621 }, 00:21:13.621 { 00:21:13.621 "name": "nvmf_tgt_poll_group_002", 00:21:13.621 "admin_qpairs": 0, 00:21:13.621 "io_qpairs": 1, 00:21:13.621 "current_admin_qpairs": 0, 00:21:13.621 "current_io_qpairs": 1, 00:21:13.621 "pending_bdev_io": 0, 00:21:13.621 "completed_nvme_io": 17747, 00:21:13.621 "transports": [ 00:21:13.621 { 00:21:13.621 "trtype": "TCP" 00:21:13.621 } 00:21:13.621 ] 00:21:13.621 }, 00:21:13.621 { 00:21:13.621 "name": "nvmf_tgt_poll_group_003", 00:21:13.621 "admin_qpairs": 0, 00:21:13.621 "io_qpairs": 1, 00:21:13.621 "current_admin_qpairs": 0, 00:21:13.621 "current_io_qpairs": 1, 00:21:13.621 "pending_bdev_io": 0, 00:21:13.621 "completed_nvme_io": 19969, 00:21:13.621 "transports": [ 00:21:13.621 { 00:21:13.621 "trtype": "TCP" 00:21:13.621 } 00:21:13.621 ] 00:21:13.621 } 00:21:13.621 ] 00:21:13.621 }' 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:13.621 16:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1192822 00:21:21.744 Initializing NVMe Controllers 00:21:21.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:21.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:21.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:21.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:21.744 Initialization complete. Launching workers. 
00:21:21.744 ======================================================== 00:21:21.744 Latency(us) 00:21:21.744 Device Information : IOPS MiB/s Average min max 00:21:21.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10419.60 40.70 6143.98 3715.78 8318.55 00:21:21.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10689.00 41.75 5988.51 2760.04 7603.06 00:21:21.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9208.50 35.97 6950.75 2005.38 10743.87 00:21:21.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10998.00 42.96 5819.60 1921.83 8357.75 00:21:21.744 ======================================================== 00:21:21.744 Total : 41315.09 161.39 6197.22 1921.83 10743.87 00:21:21.744 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.744 rmmod nvme_tcp 00:21:21.744 rmmod nvme_fabrics 00:21:21.744 rmmod nvme_keyring 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1192789 ']' 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1192789 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1192789 ']' 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1192789 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192789 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192789' 00:21:21.744 killing process with pid 1192789 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1192789 00:21:21.744 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1192789 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.311 16:02:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.217 16:02:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:24.217 16:02:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:24.217 16:02:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:24.790 16:02:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:26.766 16:02:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.044 16:02:58 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:32.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:32.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.044 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:32.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:32.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.045 
16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:21:32.045 00:21:32.045 --- 10.0.0.2 ping statistics --- 00:21:32.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.045 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:21:32.045 00:21:32.045 --- 10.0.0.1 ping statistics --- 00:21:32.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.045 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:32.045 net.core.busy_poll = 1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:32.045 net.core.busy_read = 1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1195443 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1195443 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1195443 ']' 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.045 16:02:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.045 [2024-07-15 16:02:58.909424] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:32.045 [2024-07-15 16:02:58.909517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.045 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.045 [2024-07-15 16:02:58.973456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.303 [2024-07-15 16:02:59.083975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.303 [2024-07-15 16:02:59.084036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.303 [2024-07-15 16:02:59.084064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.303 [2024-07-15 16:02:59.084076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.303 [2024-07-15 16:02:59.084086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
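For reference, the ADQ host configuration that perf_adq.sh lines 22-38 just applied reduces to the following sequence (a minimal sketch assuming an ice-driver E810 port and the listener address used in this run; the "ip netns exec cvl_0_0_ns_spdk" wrapper around every command is omitted):

    IFACE=cvl_0_0
    ethtool --offload "$IFACE" hw-tc-offload on               # enable TC hardware offload
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                            # busy-poll sockets instead of sleeping in epoll
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = default (queues 0-1), TC1 = ADQ (queues 2-3)
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, hardware-only (skip_sw)
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The suite then pins XPS to the ADQ receive queues via scripts/perf/nvmf/set_xps_rxqs, and nvmf_tgt is started with --wait-for-rpc so the posix sock options (--enable-placement-id 1, zerocopy send) can be set before the transport is created, as the RPC calls that follow show.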
00:21:32.303 [2024-07-15 16:02:59.084148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.303 [2024-07-15 16:02:59.084211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.303 [2024-07-15 16:02:59.084276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.303 [2024-07-15 16:02:59.084279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.303 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 [2024-07-15 16:02:59.304907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 Malloc1 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.562 [2024-07-15 16:02:59.356633] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1195506 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:32.562 16:02:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:32.562 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:34.462 "tick_rate": 2700000000, 00:21:34.462 "poll_groups": [ 00:21:34.462 { 00:21:34.462 "name": "nvmf_tgt_poll_group_000", 00:21:34.462 "admin_qpairs": 1, 00:21:34.462 "io_qpairs": 2, 00:21:34.462 "current_admin_qpairs": 1, 00:21:34.462 "current_io_qpairs": 2, 00:21:34.462 "pending_bdev_io": 0, 00:21:34.462 "completed_nvme_io": 26212, 00:21:34.462 "transports": [ 00:21:34.462 { 00:21:34.462 "trtype": "TCP" 00:21:34.462 } 00:21:34.462 ] 00:21:34.462 }, 00:21:34.462 { 00:21:34.462 "name": "nvmf_tgt_poll_group_001", 00:21:34.462 "admin_qpairs": 0, 00:21:34.462 "io_qpairs": 2, 00:21:34.462 "current_admin_qpairs": 0, 00:21:34.462 "current_io_qpairs": 2, 00:21:34.462 "pending_bdev_io": 0, 00:21:34.462 "completed_nvme_io": 25977, 00:21:34.462 "transports": [ 00:21:34.462 { 00:21:34.462 "trtype": "TCP" 00:21:34.462 } 00:21:34.462 ] 00:21:34.462 }, 00:21:34.462 { 00:21:34.462 "name": "nvmf_tgt_poll_group_002", 00:21:34.462 "admin_qpairs": 0, 00:21:34.462 "io_qpairs": 0, 00:21:34.462 "current_admin_qpairs": 0, 00:21:34.462 "current_io_qpairs": 0, 00:21:34.462 "pending_bdev_io": 0, 00:21:34.462 "completed_nvme_io": 0, 
00:21:34.462 "transports": [ 00:21:34.462 { 00:21:34.462 "trtype": "TCP" 00:21:34.462 } 00:21:34.462 ] 00:21:34.462 }, 00:21:34.462 { 00:21:34.462 "name": "nvmf_tgt_poll_group_003", 00:21:34.462 "admin_qpairs": 0, 00:21:34.462 "io_qpairs": 0, 00:21:34.462 "current_admin_qpairs": 0, 00:21:34.462 "current_io_qpairs": 0, 00:21:34.462 "pending_bdev_io": 0, 00:21:34.462 "completed_nvme_io": 0, 00:21:34.462 "transports": [ 00:21:34.462 { 00:21:34.462 "trtype": "TCP" 00:21:34.462 } 00:21:34.462 ] 00:21:34.462 } 00:21:34.462 ] 00:21:34.462 }' 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:34.462 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:34.720 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:34.720 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:34.720 16:03:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1195506 00:21:42.825 Initializing NVMe Controllers 00:21:42.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:42.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:42.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:42.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:42.825 Initialization complete. Launching workers. 00:21:42.825 ======================================================== 00:21:42.825 Latency(us) 00:21:42.825 Device Information : IOPS MiB/s Average min max 00:21:42.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6586.40 25.73 9750.12 1993.54 54669.46 00:21:42.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7278.90 28.43 8792.22 1607.62 54174.24 00:21:42.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6569.30 25.66 9757.05 1657.64 53777.65 00:21:42.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7248.10 28.31 8830.27 1512.40 53941.07 00:21:42.825 ======================================================== 00:21:42.825 Total : 27682.70 108.14 9259.05 1512.40 54669.46 00:21:42.825 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.825 rmmod nvme_tcp 00:21:42.825 rmmod nvme_fabrics 00:21:42.825 rmmod nvme_keyring 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1195443 ']' 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1195443 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1195443 ']' 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1195443 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195443 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195443' 00:21:42.825 killing process with pid 1195443 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1195443 00:21:42.825 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1195443 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.084 16:03:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.370 16:03:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.370 16:03:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:46.370 00:21:46.370 real 0m44.877s 00:21:46.370 user 2m35.375s 00:21:46.370 sys 0m11.063s 00:21:46.370 16:03:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.370 16:03:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.370 ************************************ 00:21:46.370 END TEST nvmf_perf_adq 00:21:46.370 ************************************ 00:21:46.370 16:03:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.370 16:03:12 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.370 16:03:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.370 16:03:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.370 16:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.370 ************************************ 00:21:46.370 START TEST nvmf_shutdown 00:21:46.370 ************************************ 00:21:46.370 16:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.370 * Looking for test storage... 
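The pass/fail signal for both perf_adq runs above is queue-pair placement, not IOPS: the script pulls nvmf_get_stats over RPC and counts poll groups by their current_io_qpairs value. Without ADQ steering (first run, --sock-priority 0) it expects all 4 groups to hold exactly one IO qpair; with ADQ plus --sock-priority 1 (second run) the 4 qpairs must pack onto 2 groups, leaving at least 2 groups idle. A condensed sketch of the second check (rpc.py invocation assumed; the log drives it through rpc_cmd):

    # jq emits one line per idle poll group; wc -l counts those lines
    idle=$(scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "ADQ packing failed: only $idle idle poll groups" >&2
        exit 1
    fi

The stats dumped at 16:03:01 above show exactly that packing: poll groups 000 and 001 carry two IO qpairs each, while 002 and 003 carry none.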
00:21:46.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:46.371 ************************************ 00:21:46.371 START TEST nvmf_shutdown_tc1 00:21:46.371 ************************************ 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:46.371 16:03:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.371 16:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:48.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:48.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.901 16:03:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:48.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:48.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.901 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:48.901 00:21:48.902 --- 10.0.0.2 ping statistics --- 00:21:48.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.902 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:48.902 00:21:48.902 --- 10.0.0.1 ping statistics --- 00:21:48.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.902 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1199501 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1199501 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1199501 ']' 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.902 [2024-07-15 16:03:15.516223] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:48.902 [2024-07-15 16:03:15.516316] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.902 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.902 [2024-07-15 16:03:15.584164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.902 [2024-07-15 16:03:15.695420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.902 [2024-07-15 16:03:15.695477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.902 [2024-07-15 16:03:15.695505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.902 [2024-07-15 16:03:15.695516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.902 [2024-07-15 16:03:15.695525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.902 [2024-07-15 16:03:15.695586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.902 [2024-07-15 16:03:15.695644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.902 [2024-07-15 16:03:15.695708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.902 [2024-07-15 16:03:15.695711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.902 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 [2024-07-15 16:03:15.853809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.160 16:03:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.160 16:03:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.160 Malloc1 00:21:49.160 [2024-07-15 16:03:15.942801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.160 Malloc2 00:21:49.160 Malloc3 00:21:49.160 Malloc4 00:21:49.418 Malloc5 00:21:49.418 Malloc6 00:21:49.418 Malloc7 00:21:49.418 Malloc8 00:21:49.418 Malloc9 00:21:49.418 Malloc10 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1199565 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1199565 
/var/tmp/bdevperf.sock 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1199565 ']' 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.677 { 00:21:49.677 "params": { 00:21:49.677 "name": "Nvme$subsystem", 00:21:49.677 "trtype": "$TEST_TRANSPORT", 00:21:49.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.677 "adrfam": "ipv4", 00:21:49.677 "trsvcid": "$NVMF_PORT", 00:21:49.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.677 "hdgst": ${hdgst:-false}, 00:21:49.677 "ddgst": ${ddgst:-false} 00:21:49.677 }, 00:21:49.677 "method": "bdev_nvme_attach_controller" 00:21:49.677 } 00:21:49.677 EOF 00:21:49.677 )") 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.677 { 00:21:49.677 "params": { 00:21:49.677 "name": "Nvme$subsystem", 00:21:49.677 "trtype": "$TEST_TRANSPORT", 00:21:49.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.677 "adrfam": "ipv4", 00:21:49.677 "trsvcid": "$NVMF_PORT", 00:21:49.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.677 "hdgst": ${hdgst:-false}, 00:21:49.677 "ddgst": ${ddgst:-false} 00:21:49.677 }, 00:21:49.677 "method": "bdev_nvme_attach_controller" 00:21:49.677 } 00:21:49.677 EOF 00:21:49.677 )") 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.677 { 00:21:49.677 "params": { 00:21:49.677 
"name": "Nvme$subsystem", 00:21:49.677 "trtype": "$TEST_TRANSPORT", 00:21:49.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.677 "adrfam": "ipv4", 00:21:49.677 "trsvcid": "$NVMF_PORT", 00:21:49.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.677 "hdgst": ${hdgst:-false}, 00:21:49.677 "ddgst": ${ddgst:-false} 00:21:49.677 }, 00:21:49.677 "method": "bdev_nvme_attach_controller" 00:21:49.677 } 00:21:49.677 EOF 00:21:49.677 )") 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.677 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.677 { 00:21:49.677 "params": { 00:21:49.677 "name": "Nvme$subsystem", 00:21:49.677 "trtype": "$TEST_TRANSPORT", 00:21:49.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.677 "adrfam": "ipv4", 00:21:49.677 "trsvcid": "$NVMF_PORT", 00:21:49.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.677 "hdgst": ${hdgst:-false}, 00:21:49.677 "ddgst": ${ddgst:-false} 00:21:49.677 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 
00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.678 { 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme$subsystem", 00:21:49.678 "trtype": "$TEST_TRANSPORT", 00:21:49.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "$NVMF_PORT", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.678 "hdgst": ${hdgst:-false}, 00:21:49.678 "ddgst": ${ddgst:-false} 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 } 00:21:49.678 EOF 00:21:49.678 )") 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:49.678 16:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme1", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme2", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme3", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme4", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme5", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme6", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme7", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme8", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:49.678 "hdgst": false, 
00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme9", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.678 "trsvcid": "4420", 00:21:49.678 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:49.678 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:49.678 "hdgst": false, 00:21:49.678 "ddgst": false 00:21:49.678 }, 00:21:49.678 "method": "bdev_nvme_attach_controller" 00:21:49.678 },{ 00:21:49.678 "params": { 00:21:49.678 "name": "Nvme10", 00:21:49.678 "trtype": "tcp", 00:21:49.678 "traddr": "10.0.0.2", 00:21:49.678 "adrfam": "ipv4", 00:21:49.679 "trsvcid": "4420", 00:21:49.679 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:49.679 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:49.679 "hdgst": false, 00:21:49.679 "ddgst": false 00:21:49.679 }, 00:21:49.679 "method": "bdev_nvme_attach_controller" 00:21:49.679 }' 00:21:49.679 [2024-07-15 16:03:16.434466] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:49.679 [2024-07-15 16:03:16.434559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:49.679 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.679 [2024-07-15 16:03:16.501230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.937 [2024-07-15 16:03:16.612436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1199565 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:51.863 16:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:52.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1199565 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1199501 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:52.797 16:03:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.797 { 00:21:52.797 "params": { 00:21:52.797 "name": "Nvme$subsystem", 00:21:52.797 "trtype": "$TEST_TRANSPORT", 00:21:52.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.797 "adrfam": "ipv4", 00:21:52.797 "trsvcid": "$NVMF_PORT", 00:21:52.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.797 "hdgst": ${hdgst:-false}, 00:21:52.797 "ddgst": ${ddgst:-false} 00:21:52.797 }, 00:21:52.797 "method": "bdev_nvme_attach_controller" 00:21:52.797 } 00:21:52.797 EOF 00:21:52.797 )") 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.797 { 00:21:52.797 "params": { 00:21:52.797 "name": "Nvme$subsystem", 00:21:52.797 "trtype": "$TEST_TRANSPORT", 00:21:52.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.797 "adrfam": "ipv4", 00:21:52.797 "trsvcid": "$NVMF_PORT", 00:21:52.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.797 "hdgst": ${hdgst:-false}, 00:21:52.797 "ddgst": ${ddgst:-false} 00:21:52.797 }, 00:21:52.797 "method": "bdev_nvme_attach_controller" 00:21:52.797 } 00:21:52.797 EOF 00:21:52.797 )") 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.797 { 00:21:52.797 "params": { 00:21:52.797 "name": "Nvme$subsystem", 00:21:52.797 "trtype": "$TEST_TRANSPORT", 00:21:52.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.797 "adrfam": "ipv4", 00:21:52.797 "trsvcid": "$NVMF_PORT", 00:21:52.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.797 "hdgst": ${hdgst:-false}, 00:21:52.797 "ddgst": ${ddgst:-false} 00:21:52.797 }, 00:21:52.797 "method": "bdev_nvme_attach_controller" 00:21:52.797 } 00:21:52.797 EOF 00:21:52.797 )") 00:21:52.797 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.798 { 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme$subsystem", 00:21:52.798 "trtype": "$TEST_TRANSPORT", 00:21:52.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "$NVMF_PORT", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.798 "hdgst": ${hdgst:-false}, 00:21:52.798 "ddgst": ${ddgst:-false} 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 } 00:21:52.798 EOF 00:21:52.798 )") 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
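This second config generation feeds the real bdevperf run; the actual tc1 shutdown check sits in the trace above it (shutdown.sh@83 through @91): SIGKILL the first client, prove the target outlived the abrupt disconnect, then reattach all ten controllers. In outline, with $perfpid and $nvmfpid standing in for the PIDs the script tracked (1199565 and 1199501 in this run) and gen_nvmf_target_json as traced:

    kill -9 "$perfpid"           # @83: SIGKILL the attached bdev_svc client mid-session
    rm -f /var/run/spdk_bdev1    # @84
    sleep 1                      # @87
    kill -0 "$nvmfpid"           # @88: exits non-zero, failing the run, if the target died too
    ./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 1   # @91: queue depth 64, 64 KiB I/O, verify workload, 1 second

The <(...) process substitution is why the trace shows --json /dev/fd/62: bdevperf reads the generated config from a file descriptor rather than from disk.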
00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:52.798 16:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme1", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme2", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme3", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme4", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme5", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme6", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme7", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.798 "hdgst": false, 00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.798 "method": "bdev_nvme_attach_controller" 00:21:52.798 },{ 00:21:52.798 "params": { 00:21:52.798 "name": "Nvme8", 00:21:52.798 "trtype": "tcp", 00:21:52.798 "traddr": "10.0.0.2", 00:21:52.798 "adrfam": "ipv4", 00:21:52.798 "trsvcid": "4420", 00:21:52.798 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.798 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:52.798 "hdgst": false, 
00:21:52.798 "ddgst": false 00:21:52.798 }, 00:21:52.799 "method": "bdev_nvme_attach_controller" 00:21:52.799 },{ 00:21:52.799 "params": { 00:21:52.799 "name": "Nvme9", 00:21:52.799 "trtype": "tcp", 00:21:52.799 "traddr": "10.0.0.2", 00:21:52.799 "adrfam": "ipv4", 00:21:52.799 "trsvcid": "4420", 00:21:52.799 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.799 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.799 "hdgst": false, 00:21:52.799 "ddgst": false 00:21:52.799 }, 00:21:52.799 "method": "bdev_nvme_attach_controller" 00:21:52.799 },{ 00:21:52.799 "params": { 00:21:52.799 "name": "Nvme10", 00:21:52.799 "trtype": "tcp", 00:21:52.799 "traddr": "10.0.0.2", 00:21:52.799 "adrfam": "ipv4", 00:21:52.799 "trsvcid": "4420", 00:21:52.799 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.799 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.799 "hdgst": false, 00:21:52.799 "ddgst": false 00:21:52.799 }, 00:21:52.799 "method": "bdev_nvme_attach_controller" 00:21:52.799 }' 00:21:52.799 [2024-07-15 16:03:19.449739] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:52.799 [2024-07-15 16:03:19.449818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199980 ] 00:21:52.799 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.799 [2024-07-15 16:03:19.514209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.799 [2024-07-15 16:03:19.624604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.172 Running I/O for 1 seconds... 00:21:55.545 00:21:55.545 Latency(us) 00:21:55.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme1n1 : 1.12 228.87 14.30 0.00 0.00 276881.07 22524.97 254765.13 00:21:55.545 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme2n1 : 1.08 237.81 14.86 0.00 0.00 261840.59 17864.63 257872.02 00:21:55.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme3n1 : 1.16 220.52 13.78 0.00 0.00 278121.81 19903.53 256318.58 00:21:55.545 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme4n1 : 1.07 239.59 14.97 0.00 0.00 250520.27 18641.35 253211.69 00:21:55.545 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme5n1 : 1.15 223.33 13.96 0.00 0.00 265453.99 22427.88 254765.13 00:21:55.545 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme6n1 : 1.14 228.61 14.29 0.00 0.00 248630.95 21554.06 234570.33 00:21:55.545 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme7n1 : 1.17 219.23 13.70 0.00 0.00 261787.88 22816.24 288940.94 00:21:55.545 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 
0x0 length 0x400 00:21:55.545 Nvme8n1 : 1.18 271.60 16.97 0.00 0.00 207912.50 17767.54 262532.36 00:21:55.545 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme9n1 : 1.16 221.13 13.82 0.00 0.00 250343.92 24563.86 253211.69 00:21:55.545 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.545 Verification LBA range: start 0x0 length 0x400 00:21:55.545 Nvme10n1 : 1.18 274.68 17.17 0.00 0.00 198034.09 4053.52 254765.13 00:21:55.545 =================================================================================================================== 00:21:55.545 Total : 2365.37 147.84 0.00 0.00 247643.61 4053.52 288940.94 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.803 rmmod nvme_tcp 00:21:55.803 rmmod nvme_fabrics 00:21:55.803 rmmod nvme_keyring 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1199501 ']' 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1199501 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1199501 ']' 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1199501 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1199501 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
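The teardown traced here and just below (nvmftestfini into nvmfcleanup, then killprocess) unloads the kernel initiator modules and stops the target by PID, but only after ps confirms the PID still names an SPDK reactor rather than a recycled process id. Reduced to the branch this run took, with 1199501 as this run's target:

    pid=1199501
    sync                                          # nvmf/common.sh@117
    modprobe -v -r nvme-tcp                       # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" != sudo ]; then          # @958: a sudo wrapper would be handled differently
        echo "killing process with pid $pid"
        kill "$pid"                               # @967: plain SIGTERM, the clean-shutdown path under test
        wait "$pid"                               # @972: reap it and surface its exit status
    fi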
00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1199501' 00:21:55.803 killing process with pid 1199501 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1199501 00:21:55.803 16:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1199501 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.370 16:03:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.271 00:21:58.271 real 0m12.068s 00:21:58.271 user 0m34.083s 00:21:58.271 sys 0m3.405s 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.271 ************************************ 00:21:58.271 END TEST nvmf_shutdown_tc1 00:21:58.271 ************************************ 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.271 16:03:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:58.529 ************************************ 00:21:58.529 START TEST nvmf_shutdown_tc2 00:21:58.529 ************************************ 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.529 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.530 16:03:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.530 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.530 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:21:58.530 00:21:58.530 --- 10.0.0.2 ping statistics --- 00:21:58.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.530 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:21:58.530 00:21:58.530 --- 10.0.0.1 ping statistics --- 00:21:58.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.530 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1200745 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1200745 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1200745 ']' 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.530 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.530 [2024-07-15 16:03:25.445630] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:58.530 [2024-07-15 16:03:25.445731] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.788 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.788 [2024-07-15 16:03:25.513475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.788 [2024-07-15 16:03:25.623472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.788 [2024-07-15 16:03:25.623523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.788 [2024-07-15 16:03:25.623552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.788 [2024-07-15 16:03:25.623563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.788 [2024-07-15 16:03:25.623572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
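[Annotation] The nvmf_tcp_init sequence traced above amounts to a small two-port loopback topology: one port of the e810 NIC stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while the other is moved into a private network namespace as the target (cvl_0_0, 10.0.0.2). A minimal standalone sketch, using only the commands that appear verbatim in the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

With both pings answering, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x1E), which is why NVMF_APP above is prefixed with the NVMF_TARGET_NS_CMD array.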
00:21:58.788 [2024-07-15 16:03:25.623621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.788 [2024-07-15 16:03:25.623683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.788 [2024-07-15 16:03:25.623749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.788 [2024-07-15 16:03:25.623752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.047 [2024-07-15 16:03:25.777767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.047 16:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.047 Malloc1 00:21:59.047 [2024-07-15 16:03:25.867043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.047 Malloc2 00:21:59.047 Malloc3 00:21:59.304 Malloc4 00:21:59.304 Malloc5 00:21:59.304 Malloc6 00:21:59.304 Malloc7 00:21:59.304 Malloc8 00:21:59.563 Malloc9 00:21:59.563 Malloc10 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1200923 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1200923 /var/tmp/bdevperf.sock 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1200923 ']' 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
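[Annotation] The create_subsystems loop traced above only shows bare `cat` invocations; the RPC batch each iteration appends to rpcs.txt is not echoed by xtrace. Judging by the resulting Malloc1..Malloc10 bdevs and the single 10.0.0.2:4420 listener notice, each iteration appends roughly the following, using SPDK's standard rpc.py command names (the malloc size/block-size shown here is an illustrative assumption, since the trace does not reveal it):

    # appended per subsystem, i = 1..10
    bdev_malloc_create -b Malloc$i 128 512                           # geometry assumed
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The bare `rpc_cmd` at shutdown.sh@35 then appears to replay the whole file in one rpc.py session on stdin (redirections are not shown by xtrace), which is why the Malloc bdev names and the TCP listener notice only appear after it runs.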
00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:59.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.563 { 00:21:59.563 "params": { 00:21:59.563 "name": "Nvme$subsystem", 00:21:59.563 "trtype": "$TEST_TRANSPORT", 00:21:59.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.563 "adrfam": "ipv4", 00:21:59.563 "trsvcid": "$NVMF_PORT", 00:21:59.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.563 "hdgst": ${hdgst:-false}, 00:21:59.563 "ddgst": ${ddgst:-false} 00:21:59.563 }, 00:21:59.563 "method": "bdev_nvme_attach_controller" 00:21:59.563 } 00:21:59.563 EOF 00:21:59.563 )") 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.563 { 00:21:59.563 "params": { 00:21:59.563 "name": "Nvme$subsystem", 00:21:59.563 "trtype": "$TEST_TRANSPORT", 00:21:59.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.563 "adrfam": "ipv4", 00:21:59.563 "trsvcid": "$NVMF_PORT", 00:21:59.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.563 "hdgst": ${hdgst:-false}, 00:21:59.563 "ddgst": ${ddgst:-false} 00:21:59.563 }, 00:21:59.563 "method": "bdev_nvme_attach_controller" 00:21:59.563 } 00:21:59.563 EOF 00:21:59.563 )") 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.563 { 00:21:59.563 "params": { 00:21:59.563 "name": "Nvme$subsystem", 00:21:59.563 "trtype": "$TEST_TRANSPORT", 00:21:59.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.563 "adrfam": "ipv4", 00:21:59.563 "trsvcid": "$NVMF_PORT", 00:21:59.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.563 "hdgst": ${hdgst:-false}, 00:21:59.563 "ddgst": ${ddgst:-false} 00:21:59.563 }, 00:21:59.563 "method": "bdev_nvme_attach_controller" 00:21:59.563 } 00:21:59.563 EOF 00:21:59.563 )") 00:21:59.563 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.564 { 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme$subsystem", 00:21:59.564 "trtype": "$TEST_TRANSPORT", 00:21:59.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "$NVMF_PORT", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.564 "hdgst": ${hdgst:-false}, 00:21:59.564 "ddgst": ${ddgst:-false} 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 } 00:21:59.564 EOF 00:21:59.564 )") 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:59.564 16:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme1", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme2", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme3", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme4", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme5", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme6", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme7", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.564 "hdgst": false, 00:21:59.564 "ddgst": false 00:21:59.564 }, 00:21:59.564 "method": "bdev_nvme_attach_controller" 00:21:59.564 },{ 00:21:59.564 "params": { 00:21:59.564 "name": "Nvme8", 00:21:59.564 "trtype": "tcp", 00:21:59.564 "traddr": "10.0.0.2", 00:21:59.564 "adrfam": "ipv4", 00:21:59.564 "trsvcid": "4420", 00:21:59.564 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.564 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.564 "hdgst": false, 
00:21:59.564 "ddgst": false 00:21:59.565 }, 00:21:59.565 "method": "bdev_nvme_attach_controller" 00:21:59.565 },{ 00:21:59.565 "params": { 00:21:59.565 "name": "Nvme9", 00:21:59.565 "trtype": "tcp", 00:21:59.565 "traddr": "10.0.0.2", 00:21:59.565 "adrfam": "ipv4", 00:21:59.565 "trsvcid": "4420", 00:21:59.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.565 "hdgst": false, 00:21:59.565 "ddgst": false 00:21:59.565 }, 00:21:59.565 "method": "bdev_nvme_attach_controller" 00:21:59.565 },{ 00:21:59.565 "params": { 00:21:59.565 "name": "Nvme10", 00:21:59.565 "trtype": "tcp", 00:21:59.565 "traddr": "10.0.0.2", 00:21:59.565 "adrfam": "ipv4", 00:21:59.565 "trsvcid": "4420", 00:21:59.565 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.565 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.565 "hdgst": false, 00:21:59.565 "ddgst": false 00:21:59.565 }, 00:21:59.565 "method": "bdev_nvme_attach_controller" 00:21:59.565 }' 00:21:59.565 [2024-07-15 16:03:26.377980] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:59.565 [2024-07-15 16:03:26.378059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200923 ] 00:21:59.565 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.565 [2024-07-15 16:03:26.442545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.822 [2024-07-15 16:03:26.553752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.193 Running I/O for 10 seconds... 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.449 16:03:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.449 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.707 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.707 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:01.707 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:01.707 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:01.965 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1200923 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1200923 ']' 00:22:02.224 16:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1200923 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1200923 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1200923' 00:22:02.224 killing process with pid 1200923 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1200923 00:22:02.224 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1200923 00:22:02.224 Received shutdown signal, test time was about 1.011995 seconds 00:22:02.224 00:22:02.224 Latency(us) 00:22:02.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme1n1 : 1.01 254.07 15.88 0.00 0.00 249088.00 19903.53 239230.67 00:22:02.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme2n1 : 1.00 255.52 15.97 0.00 0.00 243127.18 40777.96 250104.79 00:22:02.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme3n1 : 0.99 259.35 16.21 0.00 0.00 234885.12 20291.89 250104.79 00:22:02.224 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme4n1 : 1.00 261.43 16.34 0.00 0.00 228324.51 2585.03 248551.35 00:22:02.224 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme5n1 : 0.97 197.06 12.32 0.00 0.00 296901.34 22816.24 256318.58 00:22:02.224 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme6n1 : 0.98 196.45 12.28 0.00 0.00 291851.63 21845.33 274959.93 00:22:02.224 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme7n1 : 1.01 253.17 15.82 0.00 0.00 222807.80 22136.60 259425.47 00:22:02.224 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme8n1 : 0.99 257.58 16.10 0.00 0.00 213986.80 23884.23 253211.69 00:22:02.224 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme9n1 : 0.97 198.24 12.39 0.00 0.00 270926.63 20874.43 256318.58 00:22:02.224 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.224 Verification LBA range: start 0x0 length 0x400 00:22:02.224 Nvme10n1 : 0.98 195.81 12.24 0.00 0.00 269087.73 20680.25 299815.06 00:22:02.224 
=================================================================================================================== 00:22:02.224 Total : 2328.67 145.54 0.00 0.00 248710.75 2585.03 299815.06 00:22:02.789 16:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1200745 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.723 rmmod nvme_tcp 00:22:03.723 rmmod nvme_fabrics 00:22:03.723 rmmod nvme_keyring 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1200745 ']' 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1200745 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1200745 ']' 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1200745 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1200745 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1200745' 00:22:03.723 killing process with pid 1200745 00:22:03.723 16:03:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1200745 00:22:03.723 16:03:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1200745 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.289 16:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.194 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.194 00:22:06.194 real 0m7.882s 00:22:06.194 user 0m23.765s 00:22:06.194 sys 0m1.606s 00:22:06.194 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.194 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.194 ************************************ 00:22:06.194 END TEST nvmf_shutdown_tc2 00:22:06.194 ************************************ 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:06.454 ************************************ 00:22:06.454 START TEST nvmf_shutdown_tc3 00:22:06.454 ************************************ 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.454 16:03:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:06.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:06.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:06.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:06.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.454 16:03:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.454 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:22:06.455 00:22:06.455 --- 10.0.0.2 ping statistics --- 00:22:06.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.455 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:06.455 00:22:06.455 --- 10.0.0.1 ping statistics --- 00:22:06.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.455 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1201838 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1201838 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1201838 ']' 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.455 16:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.712 [2024-07-15 16:03:33.385468] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:06.712 [2024-07-15 16:03:33.385553] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.712 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.712 [2024-07-15 16:03:33.458797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.712 [2024-07-15 16:03:33.580040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.712 [2024-07-15 16:03:33.580092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.712 [2024-07-15 16:03:33.580108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.712 [2024-07-15 16:03:33.580121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.712 [2024-07-15 16:03:33.580133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
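For readers reconstructing the network bring-up traced above: nvmf_tcp_init carves the NIC's two cvl_* ports into an initiator/target loopback by hiding the target port in a private network namespace. A condensed sketch of the same steps, using the interface names and addresses from this run (a simplified reconstruction, not the verbatim common.sh code):

    # Target port lives in its own namespace so initiator traffic is routed
    # between the two ports instead of looping inside the host stack.
    NS=cvl_0_0_ns_spdk          # namespace name from this run
    TARGET_IF=cvl_0_0           # becomes 10.0.0.2 inside $NS
    INITIATOR_IF=cvl_0_1        # becomes 10.0.0.1 on the host

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP listener port, then prove both directions route.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk above: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD (common.sh@270), so the target binds 10.0.0.2 inside the namespace.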
00:22:06.712 [2024-07-15 16:03:33.580247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.712 [2024-07-15 16:03:33.580336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.712 [2024-07-15 16:03:33.580410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.712 [2024-07-15 16:03:33.580412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 [2024-07-15 16:03:34.338543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.682 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 Malloc1 00:22:07.682 [2024-07-15 16:03:34.425713] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.682 Malloc2 00:22:07.682 Malloc3 00:22:07.682 Malloc4 00:22:07.942 Malloc5 00:22:07.942 Malloc6 00:22:07.942 Malloc7 00:22:07.942 Malloc8 00:22:07.942 Malloc9 00:22:07.942 Malloc10 00:22:07.942 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.942 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:07.942 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.942 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1202146 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1202146 /var/tmp/bdevperf.sock 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1202146 ']' 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
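The bdevperf launch traced above reads its bdev configuration from /dev/fd/63, which is most plausibly bash process substitution: gen_nvmf_target_json (whose trace follows) writes the attach-controller config to a pipe that bdevperf opens like a file. A minimal sketch of the pattern, with the flags from this run:

    # -q 64: queue depth; -o 65536: I/O size in bytes; -w verify: read-back
    # verification workload; -t 10: run for ten seconds.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10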
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:22:08.200 {
00:22:08.200 "params": {
00:22:08.200 "name": "Nvme$subsystem",
00:22:08.200 "trtype": "$TEST_TRANSPORT",
00:22:08.200 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:08.200 "adrfam": "ipv4",
00:22:08.200 "trsvcid": "$NVMF_PORT",
00:22:08.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:08.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:08.200 "hdgst": ${hdgst:-false},
00:22:08.200 "ddgst": ${ddgst:-false}
00:22:08.200 },
00:22:08.200 "method": "bdev_nvme_attach_controller"
00:22:08.200 }
00:22:08.200 EOF
00:22:08.200 )")
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[trace note: the nvmf/common.sh@534 "for subsystem" / @554 "config+=" here-doc / @554 "cat" sequence above repeats verbatim for each of the ten subsystems passed to gen_nvmf_target_json]
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:08.200 16:03:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:08.200 "params": { 00:22:08.200 "name": "Nvme1", 00:22:08.200 "trtype": "tcp", 00:22:08.200 "traddr": "10.0.0.2", 00:22:08.200 "adrfam": "ipv4", 00:22:08.200 "trsvcid": "4420", 00:22:08.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.200 "hdgst": false, 00:22:08.200 "ddgst": false 00:22:08.200 }, 00:22:08.200 "method": "bdev_nvme_attach_controller" 00:22:08.200 },{ 00:22:08.200 "params": { 00:22:08.200 "name": "Nvme2", 00:22:08.200 "trtype": "tcp", 00:22:08.200 "traddr": "10.0.0.2", 00:22:08.200 "adrfam": "ipv4", 00:22:08.200 "trsvcid": "4420", 00:22:08.200 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:08.200 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.200 "hdgst": false, 00:22:08.200 "ddgst": false 00:22:08.200 }, 00:22:08.200 "method": "bdev_nvme_attach_controller" 00:22:08.200 },{ 00:22:08.200 "params": { 00:22:08.200 "name": "Nvme3", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme4", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme5", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme6", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme7", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme8", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:08.201 "hdgst": false, 
00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme9", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 },{ 00:22:08.201 "params": { 00:22:08.201 "name": "Nvme10", 00:22:08.201 "trtype": "tcp", 00:22:08.201 "traddr": "10.0.0.2", 00:22:08.201 "adrfam": "ipv4", 00:22:08.201 "trsvcid": "4420", 00:22:08.201 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:08.201 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:08.201 "hdgst": false, 00:22:08.201 "ddgst": false 00:22:08.201 }, 00:22:08.201 "method": "bdev_nvme_attach_controller" 00:22:08.201 }' 00:22:08.201 [2024-07-15 16:03:34.936728] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:08.201 [2024-07-15 16:03:34.936804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202146 ] 00:22:08.201 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.201 [2024-07-15 16:03:34.999621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.201 [2024-07-15 16:03:35.109991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.100 Running I/O for 10 seconds... 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:10.358 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:10.616 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1201838 00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1201838 ']' 
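The read_io_count polling traced here (3, then 67, then 131) is the waitforio helper: query bdevperf's RPC socket for Nvme1n1's read count up to ten times, 0.25 s apart, and succeed once it reaches 100. A sketch of the loop as traced, assuming rpc_cmd is the suite's usual wrapper around scripts/rpc.py:

    # Poll bdev_get_iostat until the bdev shows real read traffic.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break    # I/O is flowing; stop polling
            fi
            sleep 0.25
        done
        return "$ret"
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1

In this run the threshold is crossed on the third poll, so the test proceeds to shut the target down while bdevperf is still driving I/O.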
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1201838
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1201838
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:10.882 16:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1201838'
00:22:10.882 killing process with pid 1201838
00:22:10.882 [2024-07-15 16:03:37.757729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b09a80 is same with the state(5) to be set
00:22:10.882 [trace note: message repeated verbatim for tqpair=0x1b09a80 through 16:03:37.758628]
00:22:10.883 [2024-07-15 16:03:37.760034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c480 is same with the state(5) to be set
00:22:10.883 [trace note: message repeated verbatim for tqpair=0x1b0c480 through 16:03:37.760834]
00:22:10.883 [2024-07-15 16:03:37.762210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b09f20 is same with the state(5) to be set
00:22:10.884 [trace note: message repeated verbatim for tqpair=0x1b09f20 through 16:03:37.763017]
00:22:10.884 [2024-07-15 16:03:37.764482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set
00:22:10.884 [trace note: message repeated verbatim for tqpair=0x1b0a3c0; the captured log is truncated mid-run here]
16:03:37.764753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764893] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.884 [2024-07-15 16:03:37.764954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.764966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.764978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.764991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same 
with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765139] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.765296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a3c0 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766379] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766629] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the 
state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766979] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.766999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767145] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.885 [2024-07-15 16:03:37.767156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a880 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 
16:03:37.767942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.767998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same 
with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768316] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.886 [2024-07-15 16:03:37.768354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ad20 is same with the state(5) to be set 00:22:10.887 [2024-07-15 16:03:37.768478] 
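For context on the burst above: the tcp.c:1621 message is SPDK's guard against setting a qpair's PDU receive state to the value it already holds, so during a teardown that keeps requesting the same state, every call after the first logs one line. Below is a minimal sketch of that guard, abridged from the shape of SPDK's lib/nvmf/tcp.c; the format string matches the log verbatim, but the type stubs and the enum value name are placeholders so the sketch compiles on its own, and the real function also resets per-state bookkeeping that is omitted here.

    #include <stdio.h>

    /* Stand-ins so the sketch compiles outside the SPDK tree; real
     * definitions live in SPDK's nvme_tcp.h / tcp.c. The name for
     * state value 5 is an assumption, only the number comes from
     * the log. */
    enum nvme_tcp_pdu_recv_state { RECV_STATE_EXAMPLE = 5 };
    struct spdk_nvmf_tcp_qpair { enum nvme_tcp_pdu_recv_state recv_state; };
    #define SPDK_ERRLOG(...) fprintf(stderr, __VA_ARGS__)

    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Benign but noisy: asked to enter the state the
                     * qpair is already in. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                (void *)tqpair, (int)state);
                    return;
            }
            tqpair->recv_state = state;  /* real code also resets per-state bookkeeping */
    }

Called repeatedly with the same state while a connection is being torn down, this guard would produce exactly the kind of per-tqpair bursts seen here.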
00:22:10.887 [2024-07-15 16:03:37.769670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.887 [2024-07-15 16:03:37.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.887 [condensed: the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3, followed by "nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e8c60 is same with the state(5) to be set"; the whole four-command block then repeats between 16:03:37.769887 and 16:03:37.770752 for tqpair=0x15793c0, 0x1592240, 0xfb2e20, 0x13e9280 and 0x13c6830]
00:22:10.887 [2024-07-15 16:03:37.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.887 [2024-07-15 16:03:37.773506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.888 [condensed: the WRITE / ABORTED - SQ DELETION pair repeats for cid:6 through cid:9 (lba 25344-25728, len:128) through 16:03:37.773644]
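The "(00/08)" in these completions is the NVMe status pair (status code type / status code): SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, i.e. the submission queue these commands were sitting on was deleted while they were outstanding. A small standalone helper, sketched here for illustration, shows how that pair and the p/m/dnr bits also printed by spdk_nvme_print_completion unpack from the 16-bit completion Status Field (layout per the NVMe base specification):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack an NVMe completion Status Field (CQE dword 3, bits 31:16).
     * Per the NVMe base spec: bit 0 = phase tag (p), bits 8:1 = status
     * code (sc), bits 11:9 = status code type (sct), bit 14 = more (m),
     * bit 15 = do not retry (dnr). */
    static void print_status(uint16_t status)
    {
            unsigned p   = status & 0x1;
            unsigned sc  = (status >> 1) & 0xff;
            unsigned sct = (status >> 9) & 0x7;
            unsigned m   = (status >> 14) & 0x1;
            unsigned dnr = (status >> 15) & 0x1;
            printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
            /* SCT 0x0 / SC 0x08, as in the aborted commands above. */
            print_status(0x08 << 1);  /* prints "(00/08) p:0 m:0 dnr:0" */
            return 0;
    }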
00:22:10.888 [condensed: the WRITE / ABORTED - SQ DELETION pair continues for cid:10 through cid:36 (lba 25856-29184, len:128, all "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0") between 16:03:37.773660 and 16:03:37.774480, with a single interleaved "tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bb20 is same with the state(5) to be set" at 16:03:37.773740 and one for tqpair=0x1b0bfc0 at 16:03:37.774476]
00:22:10.889 [condensed, de-interleaved: from 16:03:37.774476 onward two output streams are interleaved character-by-character in the raw log. Stream 1: the WRITE / ABORTED - SQ DELETION pair continues for cid:37 onward (cid:37 through cid:52 visible here, lba 29312-31232, len:128). Stream 2: "tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set" repeats through 16:03:37.775008]
00:22:10.889 [2024-07-15
16:03:37.775012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with [2024-07-15 16:03:37.775058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1the state(5) to be set 00:22:10.889 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:03:37.775110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.889 [2024-07-15 16:03:37.775162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with [2024-07-15 16:03:37.775195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:10.889 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1[2024-07-15 16:03:37.775245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with [2024-07-15 16:03:37.775260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:10.889 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 [2024-07-15 16:03:37.775328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 
00:22:10.889 [2024-07-15 16:03:37.775337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.889 [2024-07-15 16:03:37.775340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.889 [2024-07-15 16:03:37.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:03:37.775352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.889 the state(5) to be set 00:22:10.890 [2024-07-15 16:03:37.775365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0bfc0 is same with the state(5) to be set 00:22:10.890 [2024-07-15 16:03:37.775368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775656] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c2000 was disconnected and freed. reset controller. 
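The stretch above is the initiator draining qpair 0x13c2000 during the reset: every outstanding WRITE and READ is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION, while the target's tcp.c recv-state errors interleave on the same console. To summarize a flood like this offline, a short shell pass over a saved copy of the console output is enough; a minimal sketch, assuming the output was saved as build.log (an assumed file name, not produced by this job):

# Tally how many aborted WRITEs vs READs the initiator printed.
# build.log is an assumed saved copy of this console output; grep -o emits
# one line per match, so run-together log lines are still counted correctly.
grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (WRITE|READ)' build.log \
  | awk '{ counts[$NF]++ } END { for (op in counts) print op, counts[op] }'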
00:22:10.890 [2024-07-15 16:03:37.775748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.775972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.775987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 
[2024-07-15 16:03:37.776096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 
16:03:37.776433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 
16:03:37.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.890 [2024-07-15 16:03:37.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.890 [2024-07-15 16:03:37.776905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.776929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.776945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.776958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.776974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.776988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 
16:03:37.777074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.777799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.777907] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1461d30 was disconnected and freed. reset controller. 00:22:10.891 [2024-07-15 16:03:37.778002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.891 [2024-07-15 16:03:37.778184] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.891 [2024-07-15 16:03:37.778199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.778920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.778940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.794962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.794976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.892 [2024-07-15 16:03:37.795241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.892 [2024-07-15 16:03:37.795257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:22:10.892 [2024-07-15 16:03:37.795274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.892 [2024-07-15 16:03:37.795288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.892 [2024-07-15 16:03:37.795303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.892 [2024-07-15 16:03:37.795319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.892 [2024-07-15 16:03:37.795336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.892 [2024-07-15 16:03:37.795350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.893 [2024-07-15 16:03:37.795812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.795973] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1463140 was disconnected and freed. reset controller.
00:22:10.893 [2024-07-15 16:03:37.796335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e8c60 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.796409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f2620 is same with the state(5) to be set
00:22:10.893 [2024-07-15 16:03:37.796554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15793c0 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.796606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1591b30 is same with the state(5) to be set
00:22:10.893 [2024-07-15 16:03:37.796765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1587210 is same with the state(5) to be set
00:22:10.893 [2024-07-15 16:03:37.796957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.796978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.796993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.797006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.797021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.797035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.797048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.893 [2024-07-15 16:03:37.797061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.893 [2024-07-15 16:03:37.797075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ab10 is same with the state(5) to be set
00:22:10.893 [2024-07-15 16:03:37.797097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1592240 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.797128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2e20 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.797158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e9280 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.797189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6830 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.801025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:10.893 [2024-07-15 16:03:37.801078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:10.893 [2024-07-15 16:03:37.801103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148ab10 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.801126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f2620 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.801611] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.801652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:10.893 [2024-07-15 16:03:37.801688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591b30 (9): Bad file descriptor
00:22:10.893 [2024-07-15 16:03:37.801784] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.801852] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.801925] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.801994] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.802064] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.893 [2024-07-15 16:03:37.803152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.893 [2024-07-15 16:03:37.803183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2620 with addr=10.0.0.2, port=4420
00:22:10.893 [2024-07-15 16:03:37.803202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f2620 is same with the state(5) to be set
00:22:10.893 [2024-07-15 16:03:37.803340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.894 [2024-07-15 16:03:37.803366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148ab10 with addr=10.0.0.2, port=4420
00:22:10.894 [2024-07-15 16:03:37.803382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ab10 is same with the state(5) to be set
00:22:10.894 [2024-07-15 16:03:37.803539] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.894 [2024-07-15 16:03:37.803696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.894 [2024-07-15 16:03:37.803722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1591b30 with addr=10.0.0.2, port=4420
00:22:10.894 [2024-07-15 16:03:37.803739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1591b30 is same with the state(5) to be set
00:22:10.894 [2024-07-15 16:03:37.803758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f2620 (9): Bad file descriptor
00:22:10.894 [2024-07-15 16:03:37.803778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148ab10 (9): Bad file descriptor
00:22:10.894 [2024-07-15 16:03:37.803874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591b30 (9): Bad file descriptor
00:22:10.894 [2024-07-15 16:03:37.803911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:10.894 [2024-07-15 16:03:37.803925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:10.894 [2024-07-15 16:03:37.803943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:10.894 [2024-07-15 16:03:37.803966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:10.894 [2024-07-15 16:03:37.803981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:10.894 [2024-07-15 16:03:37.803995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:10.894 [2024-07-15 16:03:37.804052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:10.894 [2024-07-15 16:03:37.804072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:10.894 [2024-07-15 16:03:37.804085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:10.894 [2024-07-15 16:03:37.804098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:10.894 [2024-07-15 16:03:37.804111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:22:10.894 [2024-07-15 16:03:37.804167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.160 [2024-07-15 16:03:37.806288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1587210 (9): Bad file descriptor 00:22:11.160 [2024-07-15 16:03:37.806479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.160 [2024-07-15 16:03:37.806786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.160 [2024-07-15 16:03:37.806800] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.806831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.806860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.806909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.806951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.806982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.806997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.807970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.807986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.808000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.808016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.808029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.808045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.808059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.808079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.161 [2024-07-15 16:03:37.808093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.161 [2024-07-15 16:03:37.808110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.808505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540ae0 is same with the state(5) to be set 00:22:11.162 [2024-07-15 16:03:37.809827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.809852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.809872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.809896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.809921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.809937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.809953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.809968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.809984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.809998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.162 [2024-07-15 16:03:37.810640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.162 [2024-07-15 16:03:37.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.810983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.810998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.163 [2024-07-15 16:03:37.811239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.163 [2024-07-15 16:03:37.811254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.163 [2024-07-15 16:03:37.811269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.163 [2024-07-15 16:03:37.811285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.163 [2024-07-15 16:03:37.811299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command + "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:47-63, lba:22400-24448 ...]
00:22:11.163 [2024-07-15 16:03:37.811842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1541de0 is same with the state(5) to be set
00:22:11.163 [2024-07-15 16:03:37.813100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.163 [2024-07-15 16:03:37.813123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command + "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:1-63, lba:24704-32640 ...]
00:22:11.165 [2024-07-15 16:03:37.815087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543270 is same with the state(5) to be set
00:22:11.165 [2024-07-15 16:03:37.816340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.165 [2024-07-15 16:03:37.816363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.165 [2024-07-15 16:03:37.816386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.165 [2024-07-15 16:03:37.816402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further WRITE commands cid:1-3 (lba:32896-33152) and READ commands cid:5-63 (lba:25216-32640), each with an "ABORTED - SQ DELETION (00/08)" completion ...]
00:22:11.167 [2024-07-15 16:03:37.818387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f990 is same with the state(5) to be set
00:22:11.167 [2024-07-15 16:03:37.819636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.167 [2024-07-15 16:03:37.819659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command + "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:1-51, lba:16512-22912 ...]
00:22:11.168 [2024-07-15 16:03:37.821279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.168 [2024-07-15 16:03:37.821293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.168 [2024-07-15 16:03:37.821309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.168 [2024-07-15 16:03:37.821577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.168 [2024-07-15 16:03:37.821592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.821607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 
16:03:37.821622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.821637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.821651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1460e20 is same with the state(5) to be set 00:22:11.169 [2024-07-15 16:03:37.822924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.822948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.822969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.169 [2024-07-15 16:03:37.823808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.169 [2024-07-15 16:03:37.823823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.823840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.823854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.823871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.823893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.823910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.823935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.823952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.823966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.823981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.823996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.170 [2024-07-15 16:03:37.824760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.170 [2024-07-15 16:03:37.824774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.170 [2024-07-15 16:03:37.824803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.170 [2024-07-15 16:03:37.824832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.170 [2024-07-15 16:03:37.824862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.170 [2024-07-15 16:03:37.824899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.170 [2024-07-15 16:03:37.824929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.170 [2024-07-15 16:03:37.824943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14640e0 is same with the state(5) to be set
00:22:11.170 [2024-07-15 16:03:37.827210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827411] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:11.170 [2024-07-15 16:03:37.827449] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:11.170 [2024-07-15 16:03:37.827557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:11.170 [2024-07-15 16:03:37.827845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.170 [2024-07-15 16:03:37.827882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6830 with addr=10.0.0.2, port=4420
00:22:11.170 [2024-07-15 16:03:37.827903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6830 is same with the state(5) to be set
00:22:11.170 [2024-07-15 16:03:37.828061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.171 [2024-07-15 16:03:37.828086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1592240 with addr=10.0.0.2, port=4420
00:22:11.171 [2024-07-15 16:03:37.828102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1592240 is same with the state(5) to be set
00:22:11.171 [2024-07-15 16:03:37.828356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.171 [2024-07-15 16:03:37.828381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb2e20 with addr=10.0.0.2, port=4420
00:22:11.171 [2024-07-15 16:03:37.828397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2e20 is same with the state(5) to be set
00:22:11.171 [2024-07-15 16:03:37.828528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.171 [2024-07-15 16:03:37.828553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e8c60 with addr=10.0.0.2, port=4420
00:22:11.171 [2024-07-15 16:03:37.828568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e8c60 is same with the state(5) to be set
00:22:11.171 [2024-07-15 16:03:37.829953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.171 [2024-07-15 16:03:37.829978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.171 [2024-07-15 16:03:37.830004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.171 [2024-07-15 16:03:37.830020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.171 [2024-07-15 16:03:37.830036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.171 [2024-07-15 16:03:37.830050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.171 [2024-07-15 16:03:37.830068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.171 [2024-07-15 16:03:37.830082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.171 [2024-07-15 16:03:37.830098] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.830971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.830988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.171 [2024-07-15 16:03:37.831154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.171 [2024-07-15 16:03:37.831170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.172 [2024-07-15 16:03:37.831963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.172 [2024-07-15 16:03:37.831977] nvme_tcp.c: 
00:22:11.172 [2024-07-15 16:03:37.833858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:11.172 [2024-07-15 16:03:37.833897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:11.172 [2024-07-15 16:03:37.833917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:11.172 task offset: 25216 on job bdev=Nvme6n1 fails
00:22:11.172
00:22:11.172 Latency(us)
00:22:11.172 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:22:11.172 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme1n1 ended in about 0.95 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme1n1 :       0.95  134.38    8.40   67.19  0.00  314201.38   21165.70  296708.17
00:22:11.172 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme2n1 ended in about 0.96 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme2n1 :       0.96  133.91    8.37   66.96  0.00  309264.69   20583.16  257872.02
00:22:11.172 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme3n1 ended in about 0.96 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme3n1 :       0.96  200.19   12.51   66.73  0.00  228033.23   18835.53  253211.69
00:22:11.172 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme4n1 ended in about 0.96 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme4n1 :       0.96  203.66   12.73   66.50  0.00  220818.04    7815.77  250104.79
00:22:11.172 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme5n1 :       0.97  132.55    8.28   66.28  0.00  294033.89   20583.16  284280.60
00:22:11.172 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme6n1 ended in about 0.94 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme6n1 :       0.94  203.91   12.74   67.97  0.00  209836.75   18447.17  237677.23
00:22:11.172 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme7n1 ended in about 0.94 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme7n1 :       0.94  203.67   12.73   67.89  0.00  205582.79   21262.79  253211.69
00:22:11.172 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme8n1 ended in about 0.94 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme8n1 :       0.94  203.44   12.71   67.81  0.00  201415.87   30486.38  256318.58
00:22:11.172 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme9n1 ended in about 0.98 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme9n1 :       0.98  131.15    8.20   65.58  0.00  273487.90   20291.89  267192.70
00:22:11.172 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.172 Job: Nvme10n1 ended in about 0.97 seconds with error
00:22:11.172 Verification LBA range: start 0x0 length 0x400
00:22:11.172 Nvme10n1 :      0.97  132.11    8.26   66.05  0.00  265043.82   23010.42  274959.93
00:22:11.172 ===================================================================================================================
00:22:11.172 Total :              1678.96  104.94  668.95  0.00  246549.53    7815.77  296708.17
00:22:11.172 [2024-07-15 16:03:37.861356] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:11.172 [2024-07-15 16:03:37.861448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:11.172 [2024-07-15 16:03:37.861849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.172 [2024-07-15 16:03:37.861909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e9280 with addr=10.0.0.2, port=4420
00:22:11.172 [2024-07-15 16:03:37.861933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9280 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.862081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.173 [2024-07-15 16:03:37.862107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15793c0 with addr=10.0.0.2, port=4420
00:22:11.173 [2024-07-15 16:03:37.862123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15793c0 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.862150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6830 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.862175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1592240 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.862195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2e20 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.862227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e8c60 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.862557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.173 [2024-07-15 16:03:37.862589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148ab10 with addr=10.0.0.2, port=4420
00:22:11.173 [2024-07-15 16:03:37.862611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ab10 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.862769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.173 [2024-07-15 16:03:37.862796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2620 with addr=10.0.0.2, port=4420
00:22:11.173 [2024-07-15 16:03:37.862812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f2620 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.862953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.173 [2024-07-15 16:03:37.862979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1591b30 with addr=10.0.0.2, port=4420
00:22:11.173 [2024-07-15 16:03:37.862995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1591b30 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.863129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:11.173 [2024-07-15 16:03:37.863154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1587210 with addr=10.0.0.2, port=4420
00:22:11.173 [2024-07-15 16:03:37.863170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1587210 is same with the state(5) to be set
00:22:11.173 [2024-07-15 16:03:37.863188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e9280 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.863207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15793c0 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.863225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.863239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.863257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:11.173 [2024-07-15 16:03:37.863279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.863294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.863307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:11.173 [2024-07-15 16:03:37.863324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.863338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.863352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:11.173 [2024-07-15 16:03:37.863368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.863381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.863394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:11.173 [2024-07-15 16:03:37.863427] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:11.173 [... the same Unable to perform failover, already in progress. notice repeats five more times (16:03:37.863450 through 16:03:37.863530) ...]
00:22:11.173 [2024-07-15 16:03:37.863942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
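As a quick aside on reading the bdevperf summary above: each job issued 64 KiB I/Os (IO size: 65536), so the MiB/s column is simply IOPS divided by 16, and Fail/s counts I/Os that completed in error (here, the reads aborted by the SQ deletion). A minimal cross-check sketch, not part of the test itself:

  awk 'BEGIN {
    iops = 134.38                        # Nvme1n1 row of the table above
    printf "MiB/s = %.2f\n", iops / 16   # 65536 B per I/O / 2^20 B per MiB = 1/16; prints 8.40
  }'

The Total row checks out the same way: the per-device IOPS values sum to 1678.96 and the MiB/s values to 104.94.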
00:22:11.173 [2024-07-15 16:03:37.863967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.173 [2024-07-15 16:03:37.863989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.173 [2024-07-15 16:03:37.864001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.173 [2024-07-15 16:03:37.864017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148ab10 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.864037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f2620 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.864055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591b30 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.864073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1587210 (9): Bad file descriptor
00:22:11.173 [2024-07-15 16:03:37.864089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.864102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.864115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:11.173 [2024-07-15 16:03:37.864132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.864147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.864160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:11.173 [2024-07-15 16:03:37.864500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.173 [2024-07-15 16:03:37.864525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:11.173 [2024-07-15 16:03:37.864539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.864551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.864565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:11.173 [2024-07-15 16:03:37.864582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.864596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.864610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:11.173 [2024-07-15 16:03:37.864625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:11.173 [2024-07-15 16:03:37.864638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:11.173 [2024-07-15 16:03:37.864652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
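For readers decoding this cascade: errno 111 from posix_sock_create is ECONNREFUSED on Linux. The target application has already stopped, so every reconnect the bdev layer attempts is refused at the TCP layer, the reconnect poll fails, and each controller lands in the failed state. A one-line check of the mapping (a sketch, assuming python3 is available):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused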
00:22:11.173 [2024-07-15 16:03:37.864667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:11.173 [2024-07-15 16:03:37.864680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:11.173 [2024-07-15 16:03:37.864699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:11.173 [2024-07-15 16:03:37.864747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.173 [2024-07-15 16:03:37.864767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.173 [2024-07-15 16:03:37.864779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.173 [2024-07-15 16:03:37.864791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.431 16:03:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:11.431 16:03:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1202146 00:22:12.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1202146) - No such process 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.806 rmmod nvme_tcp 00:22:12.806 rmmod nvme_fabrics 00:22:12.806 rmmod nvme_keyring 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.806 16:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.711 00:22:14.711 real 0m8.294s 00:22:14.711 user 0m21.798s 00:22:14.711 sys 0m1.460s 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.711 ************************************ 00:22:14.711 END TEST nvmf_shutdown_tc3 00:22:14.711 ************************************ 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:14.711 00:22:14.711 real 0m28.459s 00:22:14.711 user 1m19.737s 00:22:14.711 sys 0m6.611s 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.711 16:03:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:14.711 ************************************ 00:22:14.711 END TEST nvmf_shutdown 00:22:14.711 ************************************ 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:14.711 16:03:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.711 16:03:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.711 16:03:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:14.711 16:03:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.711 16:03:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.711 ************************************ 00:22:14.711 START TEST nvmf_multicontroller 00:22:14.711 ************************************ 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:14.711 * Looking for test storage... 
00:22:14.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.711 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:14.712 16:03:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.712 16:03:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.240 16:03:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.240 16:03:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:22:17.240 00:22:17.240 --- 10.0.0.2 ping statistics --- 00:22:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.240 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:17.240 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:22:17.240 00:22:17.240 --- 10.0.0.1 ping statistics --- 00:22:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.240 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1204607 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1204607 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1204607 ']' 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.241 16:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.241 [2024-07-15 16:03:43.888449] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:17.241 [2024-07-15 16:03:43.888541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.241 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.241 [2024-07-15 16:03:43.956138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.241 [2024-07-15 16:03:44.071817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.241 [2024-07-15 16:03:44.071891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.241 [2024-07-15 16:03:44.071919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.241 [2024-07-15 16:03:44.071932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.241 [2024-07-15 16:03:44.071944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
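A note on the -m 0xE core mask passed to nvmf_tgt above: each set bit of an SPDK/DPDK core mask selects one CPU core, so 0xE (binary 1110) selects cores 1 through 3. That is why the EAL reports three available cores and the reactor notices just below land on cores 1, 2 and 3. A small illustrative sketch of the decoding:

  mask=0xE
  printf 'cores:'
  for bit in 0 1 2 3 4 5 6 7; do
    (( (mask >> bit) & 1 )) && printf ' %d' "$bit"   # bit set -> core selected
  done
  echo    # prints: cores: 1 2 3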
00:22:17.241 [2024-07-15 16:03:44.072047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.241 [2024-07-15 16:03:44.072157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.241 [2024-07-15 16:03:44.072159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 [2024-07-15 16:03:44.863072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 Malloc0 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 [2024-07-15 16:03:44.920844] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.175 [2024-07-15 16:03:44.928724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.175 Malloc1
00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:18.175 16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1204768
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
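Taken together, the trace above provisions the whole target side: one TCP transport, two 64 MiB malloc bdevs, two subsystems (cnode1 and cnode2) each exposing its namespace on listeners 4420 and 4421, and a bdevperf process with its own RPC socket. Outside the test suite's rpc_cmd wrapper, the same sequence could be driven with SPDK's scripts/rpc.py; a sketch for cnode1, under the assumption that a target is already listening on the default RPC socket:

  rpc=./scripts/rpc.py   # path inside an SPDK checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2/Malloc1 repeat the same pattern, as in the trace above.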
16:03:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1204768 /var/tmp/bdevperf.sock
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1204768 ']'
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable
16:03:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.434 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 ))
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0
16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.691 NVMe0n1
00:22:18.691 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:18.691 1
00:22:18.691 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
request:
{
  "name": "NVMe0",
  "trtype": "tcp",
  "traddr": "10.0.0.2",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode1",
  "hostnqn": "nqn.2021-09-7.io.spdk:00001",
  "hostaddr": "10.0.0.2",
  "hostsvcid": "60000",
  "prchk_reftag": false,
  "prchk_guard": false,
  "hdgst": false,
  "ddgst": false,
  "method": "bdev_nvme_attach_controller",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -114,
  "message": "A controller named NVMe0 already exists with the specified network path\n"
}
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
request:
{
  "name": "NVMe0",
  "trtype": "tcp",
  "traddr": "10.0.0.2",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode2",
  "hostaddr": "10.0.0.2",
  "hostsvcid": "60000",
  "prchk_reftag": false,
  "prchk_guard": false,
  "hdgst": false,
  "ddgst": false,
  "method": "bdev_nvme_attach_controller",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -114,
  "message": "A controller named NVMe0 already exists with the specified network path\n"
}
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
request:
{
  "name": "NVMe0",
  "trtype": "tcp",
  "traddr": "10.0.0.2",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode1",
  "hostaddr": "10.0.0.2",
  "hostsvcid": "60000",
  "prchk_reftag": false,
  "prchk_guard": false,
  "hdgst": false,
  "ddgst": false,
  "multipath": "disable",
  "method": "bdev_nvme_attach_controller",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -114,
  "message": "A controller named NVMe0 already exists and multipath is disabled\n"
}
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.692 request: 00:22:18.692 { 00:22:18.692 "name": "NVMe0", 00:22:18.692 "trtype": "tcp", 00:22:18.692 "traddr": "10.0.0.2", 00:22:18.692 "adrfam": "ipv4", 00:22:18.692 "trsvcid": "4420", 00:22:18.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.692 "hostaddr": "10.0.0.2", 00:22:18.692 "hostsvcid": "60000", 00:22:18.692 "prchk_reftag": false, 00:22:18.692 "prchk_guard": false, 00:22:18.692 "hdgst": false, 00:22:18.692 "ddgst": false, 00:22:18.692 "multipath": "failover", 00:22:18.692 "method": "bdev_nvme_attach_controller", 00:22:18.692 "req_id": 1 00:22:18.692 } 00:22:18.692 Got JSON-RPC error response 00:22:18.692 response: 00:22:18.692 { 00:22:18.692 "code": -114, 00:22:18.692 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:18.692 } 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.692 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.948 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.204 16:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.204 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:19.204 16:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.132 0 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1204768 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1204768 ']' 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1204768 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1204768 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1204768' 00:22:20.132 killing process with pid 1204768 00:22:20.132 16:03:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1204768 00:22:20.132 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1204768 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:20.388 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.389 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:20.646 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:20.646 [2024-07-15 16:03:45.034831] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:20.646 [2024-07-15 16:03:45.034970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204768 ] 00:22:20.646 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.646 [2024-07-15 16:03:45.095679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.646 [2024-07-15 16:03:45.203925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.646 [2024-07-15 16:03:45.869417] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 314d41c9-4f9c-448a-b2a9-beb79985629c already exists 00:22:20.646 [2024-07-15 16:03:45.869456] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:314d41c9-4f9c-448a-b2a9-beb79985629c alias for bdev NVMe1n1 00:22:20.646 [2024-07-15 16:03:45.869482] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:20.646 Running I/O for 1 seconds... 
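The try.txt capture above records bdevperf coming up under the multicontroller test: the second controller's namespace trips over an already-registered bdev UUID (the spdk_bdev_register() errors), and the one-second write run starts; its I/O summary follows below. For orientation, the control pattern behind this phase reduces to a hedged sketch like the following (not an excerpt from multicontroller.sh; $rootdir stands for the SPDK checkout and the bdevperf binary location varies by build layout):

# Start bdevperf idle (-z waits for a perform_tests RPC) on a private socket,
# matching the queue depth / IO size / workload of the job line below.
$rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
# Attach the first path, pinning host address and service id as the test does.
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# A second listener (port 4421) can then be attached under the same controller name.
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Kick off the queued workload once everything is attached.
$rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests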
00:22:20.646
00:22:20.646 Latency(us)
00:22:20.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:20.646 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:20.646 NVMe0n1 : 1.01 17413.43 68.02 0.00 0.00 7319.68 5971.06 13495.56
00:22:20.646 ===================================================================================================================
00:22:20.646 Total : 17413.43 68.02 0.00 0.00 7319.68 5971.06 13495.56
00:22:20.646 Received shutdown signal, test time was about 1.000000 seconds
00:22:20.646
00:22:20.646 Latency(us)
00:22:20.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:20.646 ===================================================================================================================
00:22:20.646 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:20.646 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:20.646 rmmod nvme_tcp
00:22:20.646 rmmod nvme_fabrics
00:22:20.646 rmmod nvme_keyring
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1204607 ']'
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1204607
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1204607 ']'
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1204607
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1204607
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1204607'
00:22:20.646 killing process with pid 1204607
00:22:20.646 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1204607
00:22:20.646 16:03:47
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1204607 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.904 16:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.815 16:03:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.815 00:22:22.815 real 0m8.197s 00:22:22.815 user 0m13.930s 00:22:22.815 sys 0m2.402s 00:22:22.815 16:03:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.072 16:03:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 ************************************ 00:22:23.072 END TEST nvmf_multicontroller 00:22:23.072 ************************************ 00:22:23.072 16:03:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:23.072 16:03:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:23.072 16:03:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:23.072 16:03:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.072 16:03:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 ************************************ 00:22:23.072 START TEST nvmf_aer 00:22:23.072 ************************************ 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:23.072 * Looking for test storage... 
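That closes nvmf_multicontroller; run_test now launches nvmf_aer, whose storage-discovery preamble continues below. Distilled from the three -114 responses logged above, the attach rules the suite verified come down to the following (a hypothetical standalone session against the same bdevperf socket; the error strings are the ones actually returned above):

# First attach registers controller NVMe0 (bdev NVMe0n1 appears).
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Same name, same path, extra hostnqn: -114 "already exists with the specified network path".
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
# Same name but a different subsystem (cnode2): also -114, the controller name is taken.
# With -x disable: -114 "already exists and multipath is disabled".
# With -x failover on the identical path: still -114; only a genuinely new
# listener (port 4421 above) attaches cleanly under the existing name.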
00:22:23.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.072 16:03:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.073 16:03:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.008 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:25.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:22:25.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:25.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:25.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.009 
16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:22:25.009 00:22:25.009 --- 10.0.0.2 ping statistics --- 00:22:25.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.009 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:22:25.009 00:22:25.009 --- 10.0.0.1 ping statistics --- 00:22:25.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.009 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1207010 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1207010 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1207010 ']' 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.009 16:03:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.009 [2024-07-15 16:03:51.921380] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:25.009 [2024-07-15 16:03:51.921466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.267 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.267 [2024-07-15 16:03:51.999394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.267 [2024-07-15 16:03:52.121563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.267 [2024-07-15 16:03:52.121628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
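Those startup notices confirm the target booted inside the namespace that nvmf/common.sh assembled just above. Condensed, the plumbing is a physical-port-per-side split with a pinhole for the TCP listener; the commands below are the ones traced above, gathered into one place (device names as discovered on this rig):

ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                 # root ns -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator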
00:22:25.267 [2024-07-15 16:03:52.121644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.267 [2024-07-15 16:03:52.121656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.267 [2024-07-15 16:03:52.121667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.267 [2024-07-15 16:03:52.123902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.267 [2024-07-15 16:03:52.123934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.267 [2024-07-15 16:03:52.124049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.267 [2024-07-15 16:03:52.124052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 [2024-07-15 16:03:52.286763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 Malloc0 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.525 [2024-07-15 16:03:52.340503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:25.525 [
00:22:25.525 {
00:22:25.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:25.525 "subtype": "Discovery",
00:22:25.525 "listen_addresses": [],
00:22:25.525 "allow_any_host": true,
00:22:25.525 "hosts": []
00:22:25.525 },
00:22:25.525 {
00:22:25.525 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:25.525 "subtype": "NVMe",
00:22:25.525 "listen_addresses": [
00:22:25.525 {
00:22:25.525 "trtype": "TCP",
00:22:25.525 "adrfam": "IPv4",
00:22:25.525 "traddr": "10.0.0.2",
00:22:25.525 "trsvcid": "4420"
00:22:25.525 }
00:22:25.525 ],
00:22:25.525 "allow_any_host": true,
00:22:25.525 "hosts": [],
00:22:25.525 "serial_number": "SPDK00000000000001",
00:22:25.525 "model_number": "SPDK bdev Controller",
00:22:25.525 "max_namespaces": 2,
00:22:25.525 "min_cntlid": 1,
00:22:25.525 "max_cntlid": 65519,
00:22:25.525 "namespaces": [
00:22:25.525 {
00:22:25.525 "nsid": 1,
00:22:25.525 "bdev_name": "Malloc0",
00:22:25.525 "name": "Malloc0",
00:22:25.525 "nguid": "36288BAB3CE94C0582A49640490FD5E4",
00:22:25.525 "uuid": "36288bab-3ce9-4c05-82a4-9640490fd5e4"
00:22:25.525 }
00:22:25.525 ]
00:22:25.525 }
00:22:25.525 ]
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1207054
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:22:25.525 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:22:25.525 EAL: No free 2048 kB hugepages reported on node 1
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:25.782 Malloc1
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:25.782 [
00:22:25.782 {
00:22:25.782 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:25.782 "subtype": "Discovery",
00:22:25.782 "listen_addresses": [],
00:22:25.782 "allow_any_host": true,
00:22:25.782 "hosts": []
00:22:25.782 },
00:22:25.782 {
00:22:25.782 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:25.782 "subtype": "NVMe",
00:22:25.782 "listen_addresses": [
00:22:25.782 {
00:22:25.782 "trtype": "TCP",
00:22:25.782 "adrfam": "IPv4",
00:22:25.782 "traddr": "10.0.0.2",
00:22:25.782 "trsvcid": "4420"
00:22:25.782 }
00:22:25.782 ],
00:22:25.782 "allow_any_host": true,
00:22:25.782 "hosts": [],
00:22:25.782 "serial_number": "SPDK00000000000001",
00:22:25.782 "model_number": "SPDK bdev Controller",
00:22:25.782 "max_namespaces": 2,
00:22:25.782 "min_cntlid": 1,
00:22:25.782 "max_cntlid": 65519,
00:22:25.782 "namespaces": [
00:22:25.782 {
00:22:25.782 "nsid": 1,
00:22:25.782 "bdev_name": "Malloc0",
00:22:25.782 "name": "Malloc0",
00:22:25.782 "nguid": "36288BAB3CE94C0582A49640490FD5E4",
00:22:25.782 "uuid": "36288bab-3ce9-4c05-82a4-9640490fd5e4"
00:22:25.782 },
00:22:25.782 {
00:22:25.782 "nsid": 2,
00:22:25.782 "bdev_name": "Malloc1",
00:22:25.782 "name": "Malloc1",
00:22:25.782 "nguid": "C8D80DD9B92641F48A82DEAF398E0F6E",
00:22:25.782 "uuid": "c8d80dd9-b926-41f4-8a82-deaf398e0f6e"
00:22:25.782 }
00:22:25.782 ]
00:22:25.782 }
00:22:25.782 ]
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1207054
00:22:25.782 Asynchronous Event Request test
00:22:25.782 Attaching to 10.0.0.2
00:22:25.782 Attached to 10.0.0.2
00:22:25.782 Registering asynchronous event callbacks...
00:22:25.782 Starting namespace attribute notice tests for all controllers...
00:22:25.782 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:22:25.782 aer_cb - Changed Namespace
00:22:25.782 Cleaning up...
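The aer_cb lines above are the point of the test: with the aer tool connected and its Asynchronous Event Request armed, attaching a second namespace makes the target raise a Namespace Attribute Changed notice (AEN event type 0x02 pointing at log page 4, the changed-namespace-list log). Target side, the whole exercise reduces to the rpc_cmd calls traced above, gathered into one sketch (not a verbatim script):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool (test/nvme/aer/aer) connects, arms AER, and touches its sync file;
# then, while it waits:
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # -> namespace-changed AEN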
00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.782 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.783 rmmod nvme_tcp 00:22:26.039 rmmod nvme_fabrics 00:22:26.039 rmmod nvme_keyring 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1207010 ']' 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1207010 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1207010 ']' 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1207010 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207010 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207010' 00:22:26.039 killing process with pid 1207010 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1207010 00:22:26.039 16:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1207010 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.298 16:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.197 16:03:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.197 00:22:28.197 real 0m5.279s 00:22:28.197 user 0m4.194s 00:22:28.197 sys 0m1.796s 00:22:28.197 16:03:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:28.197 16:03:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.197 ************************************ 00:22:28.197 END TEST nvmf_aer 00:22:28.197 ************************************ 00:22:28.197 16:03:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:28.197 16:03:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:28.197 16:03:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:28.197 16:03:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:28.197 16:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.454 ************************************ 00:22:28.454 START TEST nvmf_async_init 00:22:28.454 ************************************ 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:28.454 * Looking for test storage... 
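nvmf_aer tears down exactly the way nvmf_multicontroller did, and nvmf_async_init opens below with the identical storage-discovery preamble. The nvmftestfini pattern visible in both teardowns is roughly the following (a schematic, not an excerpt; _remove_spdk_ns runs with its tracing redirected to /dev/null here, so the netns removal line is an assumption about its body):

sync
set +e
for i in {1..20}; do                 # retry while connections drain
    modprobe -v -r nvme-tcp && break # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"                      # the nvmf_tgt started by nvmfappstart
ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # clear the initiator-side address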
00:22:28.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0bbe30a7afab40fcb9cf44ffc70e7122 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.454 16:03:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.454 16:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:30.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:30.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:30.408 00:22:30.408 --- 10.0.0.2 ping statistics --- 00:22:30.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.408 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:22:30.408 00:22:30.408 --- 10.0.0.1 ping statistics --- 00:22:30.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.408 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1208989 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1208989 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1208989 ']' 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.408 16:03:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.408 [2024-07-15 16:03:57.336290] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
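# Hand-written sketch (not captured output) recapping the tcp-phy wiring that nvmftestinit
# performed above: the first ice port (cvl_0_0) is moved into a network namespace to act as
# the target side, while the second (cvl_0_1) stays in the root namespace as the initiator.
# Interface names and addresses are taken from this run's log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace
# The target app is then launched inside the namespace, as the log shows:
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1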
00:22:30.408 [2024-07-15 16:03:57.336374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.666 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.666 [2024-07-15 16:03:57.405572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.666 [2024-07-15 16:03:57.525556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.666 [2024-07-15 16:03:57.525607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.666 [2024-07-15 16:03:57.525633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.666 [2024-07-15 16:03:57.525644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.666 [2024-07-15 16:03:57.525654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.666 [2024-07-15 16:03:57.525680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.597 [2024-07-15 16:03:58.354496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.597 null0 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.597 16:03:58 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.597 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bbe30a7afab40fcb9cf44ffc70e7122 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.598 [2024-07-15 16:03:58.394719] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.598 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.855 nvme0n1 00:22:31.855 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.855 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:31.855 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.855 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.855 [ 00:22:31.855 { 00:22:31.855 "name": "nvme0n1", 00:22:31.855 "aliases": [ 00:22:31.855 "0bbe30a7-afab-40fc-b9cf-44ffc70e7122" 00:22:31.855 ], 00:22:31.855 "product_name": "NVMe disk", 00:22:31.855 "block_size": 512, 00:22:31.855 "num_blocks": 2097152, 00:22:31.855 "uuid": "0bbe30a7-afab-40fc-b9cf-44ffc70e7122", 00:22:31.855 "assigned_rate_limits": { 00:22:31.855 "rw_ios_per_sec": 0, 00:22:31.855 "rw_mbytes_per_sec": 0, 00:22:31.855 "r_mbytes_per_sec": 0, 00:22:31.855 "w_mbytes_per_sec": 0 00:22:31.855 }, 00:22:31.855 "claimed": false, 00:22:31.855 "zoned": false, 00:22:31.855 "supported_io_types": { 00:22:31.855 "read": true, 00:22:31.855 "write": true, 00:22:31.855 "unmap": false, 00:22:31.855 "flush": true, 00:22:31.855 "reset": true, 00:22:31.855 "nvme_admin": true, 00:22:31.855 "nvme_io": true, 00:22:31.855 "nvme_io_md": false, 00:22:31.855 "write_zeroes": true, 00:22:31.855 "zcopy": false, 00:22:31.855 "get_zone_info": false, 00:22:31.855 "zone_management": false, 00:22:31.855 "zone_append": false, 00:22:31.855 "compare": true, 00:22:31.855 "compare_and_write": true, 00:22:31.855 "abort": true, 00:22:31.855 "seek_hole": false, 00:22:31.855 "seek_data": false, 00:22:31.855 "copy": true, 00:22:31.855 "nvme_iov_md": false 00:22:31.855 }, 00:22:31.855 "memory_domains": [ 00:22:31.855 { 00:22:31.855 "dma_device_id": "system", 00:22:31.855 "dma_device_type": 1 00:22:31.855 } 00:22:31.855 ], 00:22:31.855 "driver_specific": { 00:22:31.855 "nvme": [ 00:22:31.855 { 00:22:31.855 "trid": { 00:22:31.855 "trtype": "TCP", 00:22:31.855 "adrfam": "IPv4", 00:22:31.855 "traddr": "10.0.0.2", 
00:22:31.855 "trsvcid": "4420", 00:22:31.855 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:31.855 }, 00:22:31.855 "ctrlr_data": { 00:22:31.856 "cntlid": 1, 00:22:31.856 "vendor_id": "0x8086", 00:22:31.856 "model_number": "SPDK bdev Controller", 00:22:31.856 "serial_number": "00000000000000000000", 00:22:31.856 "firmware_revision": "24.09", 00:22:31.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:31.856 "oacs": { 00:22:31.856 "security": 0, 00:22:31.856 "format": 0, 00:22:31.856 "firmware": 0, 00:22:31.856 "ns_manage": 0 00:22:31.856 }, 00:22:31.856 "multi_ctrlr": true, 00:22:31.856 "ana_reporting": false 00:22:31.856 }, 00:22:31.856 "vs": { 00:22:31.856 "nvme_version": "1.3" 00:22:31.856 }, 00:22:31.856 "ns_data": { 00:22:31.856 "id": 1, 00:22:31.856 "can_share": true 00:22:31.856 } 00:22:31.856 } 00:22:31.856 ], 00:22:31.856 "mp_policy": "active_passive" 00:22:31.856 } 00:22:31.856 } 00:22:31.856 ] 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.856 [2024-07-15 16:03:58.648470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.856 [2024-07-15 16:03:58.648568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3090 (9): Bad file descriptor 00:22:31.856 [2024-07-15 16:03:58.781050] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.856 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 [ 00:22:32.114 { 00:22:32.114 "name": "nvme0n1", 00:22:32.114 "aliases": [ 00:22:32.114 "0bbe30a7-afab-40fc-b9cf-44ffc70e7122" 00:22:32.114 ], 00:22:32.114 "product_name": "NVMe disk", 00:22:32.114 "block_size": 512, 00:22:32.114 "num_blocks": 2097152, 00:22:32.114 "uuid": "0bbe30a7-afab-40fc-b9cf-44ffc70e7122", 00:22:32.114 "assigned_rate_limits": { 00:22:32.114 "rw_ios_per_sec": 0, 00:22:32.114 "rw_mbytes_per_sec": 0, 00:22:32.114 "r_mbytes_per_sec": 0, 00:22:32.114 "w_mbytes_per_sec": 0 00:22:32.114 }, 00:22:32.114 "claimed": false, 00:22:32.114 "zoned": false, 00:22:32.114 "supported_io_types": { 00:22:32.114 "read": true, 00:22:32.114 "write": true, 00:22:32.114 "unmap": false, 00:22:32.114 "flush": true, 00:22:32.114 "reset": true, 00:22:32.114 "nvme_admin": true, 00:22:32.114 "nvme_io": true, 00:22:32.114 "nvme_io_md": false, 00:22:32.114 "write_zeroes": true, 00:22:32.114 "zcopy": false, 00:22:32.114 "get_zone_info": false, 00:22:32.114 "zone_management": false, 00:22:32.114 "zone_append": false, 00:22:32.114 "compare": true, 00:22:32.114 "compare_and_write": true, 00:22:32.114 "abort": true, 00:22:32.114 "seek_hole": false, 00:22:32.114 "seek_data": false, 00:22:32.114 "copy": true, 00:22:32.114 "nvme_iov_md": false 00:22:32.114 }, 00:22:32.114 "memory_domains": [ 00:22:32.114 { 00:22:32.114 "dma_device_id": "system", 00:22:32.114 "dma_device_type": 
1 00:22:32.114 } 00:22:32.114 ], 00:22:32.114 "driver_specific": { 00:22:32.114 "nvme": [ 00:22:32.114 { 00:22:32.114 "trid": { 00:22:32.114 "trtype": "TCP", 00:22:32.114 "adrfam": "IPv4", 00:22:32.114 "traddr": "10.0.0.2", 00:22:32.114 "trsvcid": "4420", 00:22:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.114 }, 00:22:32.114 "ctrlr_data": { 00:22:32.114 "cntlid": 2, 00:22:32.114 "vendor_id": "0x8086", 00:22:32.114 "model_number": "SPDK bdev Controller", 00:22:32.114 "serial_number": "00000000000000000000", 00:22:32.114 "firmware_revision": "24.09", 00:22:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.114 "oacs": { 00:22:32.114 "security": 0, 00:22:32.114 "format": 0, 00:22:32.114 "firmware": 0, 00:22:32.114 "ns_manage": 0 00:22:32.114 }, 00:22:32.114 "multi_ctrlr": true, 00:22:32.114 "ana_reporting": false 00:22:32.114 }, 00:22:32.114 "vs": { 00:22:32.114 "nvme_version": "1.3" 00:22:32.114 }, 00:22:32.114 "ns_data": { 00:22:32.114 "id": 1, 00:22:32.114 "can_share": true 00:22:32.114 } 00:22:32.114 } 00:22:32.114 ], 00:22:32.114 "mp_policy": "active_passive" 00:22:32.114 } 00:22:32.114 } 00:22:32.114 ] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.S3Isn9lDs0 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.S3Isn9lDs0 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 [2024-07-15 16:03:58.833179] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.114 [2024-07-15 16:03:58.833312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.S3Isn9lDs0 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
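# Hand-written sketch (not captured output) of the async_init RPC sequence traced above,
# expressed as direct scripts/rpc.py calls; rpc_cmd in the log is assumed to be the autotest
# wrapper around rpc.py, and $KEY stands in for the mktemp file (/tmp/tmp.S3Isn9lDs0) holding
# the example TLS PSK. All subsystem names, sizes and addresses are taken from this run.
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512                                # 1024 blocks of 512 B
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bbe30a7afab40fcb9cf44ffc70e7122
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
# TLS variant on port 4421 (the deprecated PSK-path interface, per the warnings below):
KEY=$(mktemp); echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"; chmod 0600 "$KEY"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# The attach that follows in the log reconnects through the secured listener:
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"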
00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 [2024-07-15 16:03:58.841201] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.S3Isn9lDs0 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 [2024-07-15 16:03:58.849214] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.114 [2024-07-15 16:03:58.849284] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.114 nvme0n1 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.114 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.114 [ 00:22:32.114 { 00:22:32.114 "name": "nvme0n1", 00:22:32.114 "aliases": [ 00:22:32.114 "0bbe30a7-afab-40fc-b9cf-44ffc70e7122" 00:22:32.114 ], 00:22:32.114 "product_name": "NVMe disk", 00:22:32.114 "block_size": 512, 00:22:32.114 "num_blocks": 2097152, 00:22:32.114 "uuid": "0bbe30a7-afab-40fc-b9cf-44ffc70e7122", 00:22:32.114 "assigned_rate_limits": { 00:22:32.114 "rw_ios_per_sec": 0, 00:22:32.114 "rw_mbytes_per_sec": 0, 00:22:32.114 "r_mbytes_per_sec": 0, 00:22:32.114 "w_mbytes_per_sec": 0 00:22:32.114 }, 00:22:32.114 "claimed": false, 00:22:32.114 "zoned": false, 00:22:32.114 "supported_io_types": { 00:22:32.114 "read": true, 00:22:32.114 "write": true, 00:22:32.114 "unmap": false, 00:22:32.114 "flush": true, 00:22:32.114 "reset": true, 00:22:32.114 "nvme_admin": true, 00:22:32.114 "nvme_io": true, 00:22:32.114 "nvme_io_md": false, 00:22:32.114 "write_zeroes": true, 00:22:32.114 "zcopy": false, 00:22:32.114 "get_zone_info": false, 00:22:32.114 "zone_management": false, 00:22:32.114 "zone_append": false, 00:22:32.114 "compare": true, 00:22:32.114 "compare_and_write": true, 00:22:32.114 "abort": true, 00:22:32.114 "seek_hole": false, 00:22:32.114 "seek_data": false, 00:22:32.114 "copy": true, 00:22:32.114 "nvme_iov_md": false 00:22:32.114 }, 00:22:32.114 "memory_domains": [ 00:22:32.114 { 00:22:32.114 "dma_device_id": "system", 00:22:32.114 "dma_device_type": 1 00:22:32.114 } 00:22:32.114 ], 00:22:32.114 "driver_specific": { 00:22:32.114 "nvme": [ 00:22:32.114 { 00:22:32.115 "trid": { 00:22:32.115 "trtype": "TCP", 00:22:32.115 "adrfam": "IPv4", 00:22:32.115 "traddr": "10.0.0.2", 00:22:32.115 "trsvcid": "4421", 00:22:32.115 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.115 }, 00:22:32.115 "ctrlr_data": { 00:22:32.115 "cntlid": 3, 00:22:32.115 "vendor_id": "0x8086", 00:22:32.115 "model_number": "SPDK bdev Controller", 00:22:32.115 "serial_number": "00000000000000000000", 00:22:32.115 "firmware_revision": "24.09", 00:22:32.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:22:32.115 "oacs": { 00:22:32.115 "security": 0, 00:22:32.115 "format": 0, 00:22:32.115 "firmware": 0, 00:22:32.115 "ns_manage": 0 00:22:32.115 }, 00:22:32.115 "multi_ctrlr": true, 00:22:32.115 "ana_reporting": false 00:22:32.115 }, 00:22:32.115 "vs": { 00:22:32.115 "nvme_version": "1.3" 00:22:32.115 }, 00:22:32.115 "ns_data": { 00:22:32.115 "id": 1, 00:22:32.115 "can_share": true 00:22:32.115 } 00:22:32.115 } 00:22:32.115 ], 00:22:32.115 "mp_policy": "active_passive" 00:22:32.115 } 00:22:32.115 } 00:22:32.115 ] 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.S3Isn9lDs0 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.115 16:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.115 rmmod nvme_tcp 00:22:32.115 rmmod nvme_fabrics 00:22:32.115 rmmod nvme_keyring 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1208989 ']' 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1208989 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1208989 ']' 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1208989 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.115 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1208989 00:22:32.373 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.374 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.374 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1208989' 00:22:32.374 killing process with pid 1208989 00:22:32.374 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1208989 00:22:32.374 [2024-07-15 16:03:59.046833] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:22:32.374 [2024-07-15 16:03:59.046884] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.374 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1208989 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.632 16:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.533 16:04:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.533 00:22:34.533 real 0m6.220s 00:22:34.533 user 0m3.027s 00:22:34.533 sys 0m1.848s 00:22:34.533 16:04:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.533 16:04:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.533 ************************************ 00:22:34.533 END TEST nvmf_async_init 00:22:34.533 ************************************ 00:22:34.533 16:04:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:34.533 16:04:01 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:34.533 16:04:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.533 16:04:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.533 16:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.533 ************************************ 00:22:34.533 START TEST dma 00:22:34.533 ************************************ 00:22:34.533 16:04:01 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:34.533 * Looking for test storage... 
00:22:34.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.533 16:04:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.533 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.792 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.793 16:04:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.793 16:04:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.793 16:04:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.793 16:04:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:34.793 16:04:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.793 16:04:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.793 16:04:01 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:34.793 16:04:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:34.793 00:22:34.793 real 0m0.068s 00:22:34.793 user 0m0.034s 00:22:34.793 sys 0m0.039s 00:22:34.793 16:04:01 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.793 16:04:01 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:34.793 ************************************ 00:22:34.793 END TEST dma 00:22:34.793 ************************************ 00:22:34.793 16:04:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:34.793 16:04:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:34.793 16:04:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.793 16:04:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.793 16:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.793 ************************************ 00:22:34.793 START TEST nvmf_identify 00:22:34.793 ************************************ 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:34.793 * Looking for test storage... 
00:22:34.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable
00:22:34.793 16:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=()
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:22:36.743 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:22:36.743 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:36.743 Found net devices under 0000:0a:00.0: cvl_0_0
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:36.743 Found net devices under 0000:0a:00.1: cvl_0_1
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 ))
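The loop just traced is how nvmf/common.sh turns the PCI functions it found (two Intel 0x159b ports bound to the ice driver) into kernel netdev names: sysfs publishes each function's interfaces under its device directory. A standalone sketch of that lookup, assuming the same E810 port as this run (the pci value is just an example):

    #!/usr/bin/env bash
    # sysfs lists a device's netdevs under /sys/bus/pci/devices/<addr>/net/
    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"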
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:36.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:22:36.743 00:22:36.743 --- 10.0.0.2 ping statistics --- 00:22:36.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.743 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.743 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1211242
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1211242
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1211242 ']'
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:36.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:36.744 16:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
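identify.sh now launches the target inside that namespace and polls for its RPC socket. On the nvmf_tgt command line above, -i 0 is the shared-memory instance ID (it surfaces as --file-prefix=spdk0 in the EAL banner below), -e 0xFFFF enables every tracepoint group (echoed back as 'Tracepoint Group Mask 0xFFFF'), and -m 0xF is a core mask selecting four reactors. A hedged sketch of the same launch outside the harness, with the readiness poll written out instead of the waitforlisten helper:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin   # path from this job
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait up to ~10 s for the RPC Unix socket to appear
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done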
00:22:36.744 [2024-07-15 16:04:03.650011] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:22:36.744 [2024-07-15 16:04:03.650102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:37.002 EAL: No free 2048 kB hugepages reported on node 1
00:22:37.002 [2024-07-15 16:04:03.714941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:37.002 [2024-07-15 16:04:03.822537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:37.002 [2024-07-15 16:04:03.822584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:37.002 [2024-07-15 16:04:03.822615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:37.002 [2024-07-15 16:04:03.822626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:37.002 [2024-07-15 16:04:03.822635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:37.002 [2024-07-15 16:04:03.822717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:37.002 [2024-07-15 16:04:03.822782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:37.002 [2024-07-15 16:04:03.822850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:22:37.002 [2024-07-15 16:04:03.822853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 [2024-07-15 16:04:04.591950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 Malloc0
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 [2024-07-15 16:04:04.669205] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:37.934 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:37.934 [
00:22:37.934 {
00:22:37.934 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:37.934 "subtype": "Discovery",
00:22:37.934 "listen_addresses": [
00:22:37.934 {
00:22:37.934 "trtype": "TCP",
00:22:37.934 "adrfam": "IPv4",
00:22:37.934 "traddr": "10.0.0.2",
00:22:37.934 "trsvcid": "4420"
00:22:37.934 }
00:22:37.934 ],
00:22:37.934 "allow_any_host": true,
00:22:37.934 "hosts": []
00:22:37.934 },
00:22:37.934 {
00:22:37.934 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:37.934 "subtype": "NVMe",
00:22:37.934 "listen_addresses": [
00:22:37.934 {
00:22:37.934 "trtype": "TCP",
00:22:37.934 "adrfam": "IPv4",
00:22:37.934 "traddr": "10.0.0.2",
00:22:37.934 "trsvcid": "4420"
00:22:37.934 }
00:22:37.934 ],
00:22:37.934 "allow_any_host": true,
00:22:37.934 "hosts": [],
00:22:37.934 "serial_number": "SPDK00000000000001",
00:22:37.934 "model_number": "SPDK bdev Controller",
00:22:37.934 "max_namespaces": 32,
00:22:37.934 "min_cntlid": 1,
00:22:37.934 "max_cntlid": 65519,
00:22:37.934 "namespaces": [
00:22:37.934 {
00:22:37.934 "nsid": 1,
00:22:37.934 "bdev_name": "Malloc0",
00:22:37.935 "name": "Malloc0",
00:22:37.935 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:37.935 "eui64": "ABCDEF0123456789",
00:22:37.935 "uuid": "c291e8f9-06c4-477c-87ed-6093e21207db"
00:22:37.935 }
00:22:37.935 ]
00:22:37.935 }
00:22:37.935 ]
00:22:37.935 16:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
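The JSON above is simply nvmf_get_subsystems echoing back the state built by the preceding RPCs. Stripped of the xtrace noise, the whole target configuration is a short sequence of calls (rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems                                # dump the resulting state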
00:22:37.935 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:22:37.935 [2024-07-15 16:04:04.712205] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
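The -r argument just traced is an SPDK transport ID string pointing spdk_nvme_identify at the discovery subsystem configured above, and -L all turns on all debug log flags, which is why a flood of nvme_tcp/nvme_ctrlr DEBUG lines follows. For reference, the same discovery service could be queried from a stock Linux initiator with nvme-cli (not run in this job; assumes nvme-cli and the nvme-tcp module are available):

    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420   # should list the same two records shown in the report below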
00:22:37.935 [2024-07-15 16:04:04.712250] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211394 ] 00:22:37.935 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.935 [2024-07-15 16:04:04.747412] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:37.935 [2024-07-15 16:04:04.747479] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:37.935 [2024-07-15 16:04:04.747490] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:37.935 [2024-07-15 16:04:04.747506] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:37.935 [2024-07-15 16:04:04.747516] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:37.935 [2024-07-15 16:04:04.750945] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:37.935 [2024-07-15 16:04:04.751011] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1298540 0 00:22:37.935 [2024-07-15 16:04:04.757890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:37.935 [2024-07-15 16:04:04.757914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:37.935 [2024-07-15 16:04:04.757923] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:37.935 [2024-07-15 16:04:04.757929] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:37.935 [2024-07-15 16:04:04.757988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.758003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.758012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.758032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:37.935 [2024-07-15 16:04:04.758059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.765893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.765912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.765919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.765927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.765945] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:37.935 [2024-07-15 16:04:04.765958] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:37.935 [2024-07-15 16:04:04.765967] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:37.935 [2024-07-15 16:04:04.765995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766005] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.766023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.766047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.766227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.766241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.766248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.766264] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:37.935 [2024-07-15 16:04:04.766277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:37.935 [2024-07-15 16:04:04.766289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.766313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.766335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.766463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.766475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.766481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.766497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:37.935 [2024-07-15 16:04:04.766512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.766523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.766548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.766568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.766698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 
[2024-07-15 16:04:04.766713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.766720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.766737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.766754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.766785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.766806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.766942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.766957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.766964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.766971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.766980] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:37.935 [2024-07-15 16:04:04.766989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.767002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.767113] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:37.935 [2024-07-15 16:04:04.767121] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.767138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.767162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.767199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.767410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.767426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.767433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.767448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:37.935 [2024-07-15 16:04:04.767464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.935 [2024-07-15 16:04:04.767490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.935 [2024-07-15 16:04:04.767511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.935 [2024-07-15 16:04:04.767632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.935 [2024-07-15 16:04:04.767644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.935 [2024-07-15 16:04:04.767651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.935 [2024-07-15 16:04:04.767667] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:37.935 [2024-07-15 16:04:04.767675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:37.935 [2024-07-15 16:04:04.767694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:37.935 [2024-07-15 16:04:04.767709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:37.935 [2024-07-15 16:04:04.767727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.935 [2024-07-15 16:04:04.767735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.767746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.936 [2024-07-15 16:04:04.767767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.936 [2024-07-15 16:04:04.767956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:37.936 [2024-07-15 16:04:04.767970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:37.936 [2024-07-15 16:04:04.767977] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.767984] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1298540): datao=0, datal=4096, cccid=0 00:22:37.936 [2024-07-15 16:04:04.767993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f83c0) on tqpair(0x1298540): expected_datao=0, payload_size=4096 00:22:37.936 [2024-07-15 16:04:04.768001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768013] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768021] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.768054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.768060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.768080] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:37.936 [2024-07-15 16:04:04.768093] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:37.936 [2024-07-15 16:04:04.768102] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:37.936 [2024-07-15 16:04:04.768112] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:37.936 [2024-07-15 16:04:04.768120] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:37.936 [2024-07-15 16:04:04.768128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:37.936 [2024-07-15 16:04:04.768143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:37.936 [2024-07-15 16:04:04.768156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:37.936 [2024-07-15 16:04:04.768203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.936 [2024-07-15 16:04:04.768338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.768357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.768365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.768386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.936 [2024-07-15 16:04:04.768420] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.936 [2024-07-15 16:04:04.768452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.936 [2024-07-15 16:04:04.768483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.936 [2024-07-15 16:04:04.768513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:37.936 [2024-07-15 16:04:04.768533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:37.936 [2024-07-15 16:04:04.768546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.936 [2024-07-15 16:04:04.768601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f83c0, cid 0, qid 0 00:22:37.936 [2024-07-15 16:04:04.768613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8540, cid 1, qid 0 00:22:37.936 [2024-07-15 16:04:04.768620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f86c0, cid 2, qid 0 00:22:37.936 [2024-07-15 16:04:04.768628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:37.936 [2024-07-15 16:04:04.768651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f89c0, cid 4, qid 0 00:22:37.936 [2024-07-15 16:04:04.768857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.768869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.768883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f89c0) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.768911] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:37.936 [2024-07-15 16:04:04.768925] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:37.936 [2024-07-15 16:04:04.768944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.768954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.768965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.936 [2024-07-15 16:04:04.768987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f89c0, cid 4, qid 0 00:22:37.936 [2024-07-15 16:04:04.769122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:37.936 [2024-07-15 16:04:04.769134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:37.936 [2024-07-15 16:04:04.769141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769147] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1298540): datao=0, datal=4096, cccid=4 00:22:37.936 [2024-07-15 16:04:04.769155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f89c0) on tqpair(0x1298540): expected_datao=0, payload_size=4096 00:22:37.936 [2024-07-15 16:04:04.769162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769179] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769187] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.769281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.769288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f89c0) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.769314] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:37.936 [2024-07-15 16:04:04.769356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.769378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.936 [2024-07-15 16:04:04.769389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.769403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.769412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.936 [2024-07-15 16:04:04.769439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x12f89c0, cid 4, qid 0 00:22:37.936 [2024-07-15 16:04:04.769451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8b40, cid 5, qid 0 00:22:37.936 [2024-07-15 16:04:04.772887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:37.936 [2024-07-15 16:04:04.772904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:37.936 [2024-07-15 16:04:04.772911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.772917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1298540): datao=0, datal=1024, cccid=4 00:22:37.936 [2024-07-15 16:04:04.772925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f89c0) on tqpair(0x1298540): expected_datao=0, payload_size=1024 00:22:37.936 [2024-07-15 16:04:04.772932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.772941] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.772949] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.772961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.772971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.772978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.772985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8b40) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.811893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:37.936 [2024-07-15 16:04:04.811912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:37.936 [2024-07-15 16:04:04.811919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.811925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f89c0) on tqpair=0x1298540 00:22:37.936 [2024-07-15 16:04:04.811945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:37.936 [2024-07-15 16:04:04.811954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1298540) 00:22:37.936 [2024-07-15 16:04:04.811965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.936 [2024-07-15 16:04:04.812009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f89c0, cid 4, qid 0 00:22:37.937 [2024-07-15 16:04:04.812198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:37.937 [2024-07-15 16:04:04.812210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:37.937 [2024-07-15 16:04:04.812217] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:37.937 [2024-07-15 16:04:04.812224] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1298540): datao=0, datal=3072, cccid=4 00:22:37.937 [2024-07-15 16:04:04.812231] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f89c0) on tqpair(0x1298540): expected_datao=0, payload_size=3072 00:22:37.937 [2024-07-15 16:04:04.812239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:37.937 [2024-07-15 16:04:04.812258] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:37.937 [2024-07-15 16:04:04.812267] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:37.937 [2024-07-15 16:04:04.854055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:37.937 [2024-07-15 16:04:04.854062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f89c0) on tqpair=0x1298540
00:22:37.937 [2024-07-15 16:04:04.854085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1298540)
00:22:37.937 [2024-07-15 16:04:04.854106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:37.937 [2024-07-15 16:04:04.854135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f89c0, cid 4, qid 0
00:22:37.937 [2024-07-15 16:04:04.854305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:37.937 [2024-07-15 16:04:04.854318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:37.937 [2024-07-15 16:04:04.854325] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854332] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1298540): datao=0, datal=8, cccid=4
00:22:37.937 [2024-07-15 16:04:04.854339] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f89c0) on tqpair(0x1298540): expected_datao=0, payload_size=8
00:22:37.937 [2024-07-15 16:04:04.854346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854356] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:37.937 [2024-07-15 16:04:04.854364] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:38.195 [2024-07-15 16:04:04.898892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:38.195 [2024-07-15 16:04:04.898914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:38.195 [2024-07-15 16:04:04.898938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:38.195 [2024-07-15 16:04:04.898945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f89c0) on tqpair=0x1298540
00:22:38.195 =====================================================
00:22:38.195 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:38.195 =====================================================
00:22:38.195 Controller Capabilities/Features
00:22:38.195 ================================
00:22:38.195 Vendor ID: 0000
00:22:38.195 Subsystem Vendor ID: 0000
00:22:38.195 Serial Number: ....................
00:22:38.195 Model Number: ........................................
00:22:38.195 Firmware Version: 24.09
00:22:38.195 Recommended Arb Burst: 0
00:22:38.195 IEEE OUI Identifier: 00 00 00
00:22:38.195 Multi-path I/O
00:22:38.195 May have multiple subsystem ports: No
00:22:38.195 May have multiple controllers: No
00:22:38.195 Associated with SR-IOV VF: No
00:22:38.195 Max Data Transfer Size: 131072
00:22:38.195 Max Number of Namespaces: 0
00:22:38.195 Max Number of I/O Queues: 1024
00:22:38.195 NVMe Specification Version (VS): 1.3
00:22:38.195 NVMe Specification Version (Identify): 1.3
00:22:38.195 Maximum Queue Entries: 128
00:22:38.195 Contiguous Queues Required: Yes
00:22:38.195 Arbitration Mechanisms Supported
00:22:38.195 Weighted Round Robin: Not Supported
00:22:38.195 Vendor Specific: Not Supported
00:22:38.195 Reset Timeout: 15000 ms
00:22:38.195 Doorbell Stride: 4 bytes
00:22:38.195 NVM Subsystem Reset: Not Supported
00:22:38.195 Command Sets Supported
00:22:38.195 NVM Command Set: Supported
00:22:38.195 Boot Partition: Not Supported
00:22:38.195 Memory Page Size Minimum: 4096 bytes
00:22:38.195 Memory Page Size Maximum: 4096 bytes
00:22:38.195 Persistent Memory Region: Not Supported
00:22:38.195 Optional Asynchronous Events Supported
00:22:38.195 Namespace Attribute Notices: Not Supported
00:22:38.195 Firmware Activation Notices: Not Supported
00:22:38.195 ANA Change Notices: Not Supported
00:22:38.195 PLE Aggregate Log Change Notices: Not Supported
00:22:38.195 LBA Status Info Alert Notices: Not Supported
00:22:38.195 EGE Aggregate Log Change Notices: Not Supported
00:22:38.195 Normal NVM Subsystem Shutdown event: Not Supported
00:22:38.195 Zone Descriptor Change Notices: Not Supported
00:22:38.195 Discovery Log Change Notices: Supported
00:22:38.195 Controller Attributes
00:22:38.195 128-bit Host Identifier: Not Supported
00:22:38.195 Non-Operational Permissive Mode: Not Supported
00:22:38.195 NVM Sets: Not Supported
00:22:38.196 Read Recovery Levels: Not Supported
00:22:38.196 Endurance Groups: Not Supported
00:22:38.196 Predictable Latency Mode: Not Supported
00:22:38.196 Traffic Based Keep ALive: Not Supported
00:22:38.196 Namespace Granularity: Not Supported
00:22:38.196 SQ Associations: Not Supported
00:22:38.196 UUID List: Not Supported
00:22:38.196 Multi-Domain Subsystem: Not Supported
00:22:38.196 Fixed Capacity Management: Not Supported
00:22:38.196 Variable Capacity Management: Not Supported
00:22:38.196 Delete Endurance Group: Not Supported
00:22:38.196 Delete NVM Set: Not Supported
00:22:38.196 Extended LBA Formats Supported: Not Supported
00:22:38.196 Flexible Data Placement Supported: Not Supported
00:22:38.196 
00:22:38.196 Controller Memory Buffer Support
00:22:38.196 ================================
00:22:38.196 Supported: No
00:22:38.196 
00:22:38.196 Persistent Memory Region Support
00:22:38.196 ================================
00:22:38.196 Supported: No
00:22:38.196 
00:22:38.196 Admin Command Set Attributes
00:22:38.196 ============================
00:22:38.196 Security Send/Receive: Not Supported
00:22:38.196 Format NVM: Not Supported
00:22:38.196 Firmware Activate/Download: Not Supported
00:22:38.196 Namespace Management: Not Supported
00:22:38.196 Device Self-Test: Not Supported
00:22:38.196 Directives: Not Supported
00:22:38.196 NVMe-MI: Not Supported
00:22:38.196 Virtualization Management: Not Supported
00:22:38.196 Doorbell Buffer Config: Not Supported
00:22:38.196 Get LBA Status Capability: Not Supported
00:22:38.196 Command & Feature Lockdown Capability: Not Supported
00:22:38.196 Abort Command Limit: 1
00:22:38.196 Async Event Request Limit: 4
00:22:38.196 Number of Firmware Slots: N/A
00:22:38.196 Firmware Slot 1 Read-Only: N/A
00:22:38.196 Firmware Activation Without Reset: N/A
00:22:38.196 Multiple Update Detection Support: N/A
00:22:38.196 Firmware Update Granularity: No Information Provided
00:22:38.196 Per-Namespace SMART Log: No
00:22:38.196 Asymmetric Namespace Access Log Page: Not Supported
00:22:38.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:38.196 Command Effects Log Page: Not Supported
00:22:38.196 Get Log Page Extended Data: Supported
00:22:38.196 Telemetry Log Pages: Not Supported
00:22:38.196 Persistent Event Log Pages: Not Supported
00:22:38.196 Supported Log Pages Log Page: May Support
00:22:38.196 Commands Supported & Effects Log Page: Not Supported
00:22:38.196 Feature Identifiers & Effects Log Page:May Support
00:22:38.196 NVMe-MI Commands & Effects Log Page: May Support
00:22:38.196 Data Area 4 for Telemetry Log: Not Supported
00:22:38.196 Error Log Page Entries Supported: 128
00:22:38.196 Keep Alive: Not Supported
00:22:38.196 
00:22:38.196 NVM Command Set Attributes
00:22:38.196 ==========================
00:22:38.196 Submission Queue Entry Size
00:22:38.196 Max: 1
00:22:38.196 Min: 1
00:22:38.196 Completion Queue Entry Size
00:22:38.196 Max: 1
00:22:38.196 Min: 1
00:22:38.196 Number of Namespaces: 0
00:22:38.196 Compare Command: Not Supported
00:22:38.196 Write Uncorrectable Command: Not Supported
00:22:38.196 Dataset Management Command: Not Supported
00:22:38.196 Write Zeroes Command: Not Supported
00:22:38.196 Set Features Save Field: Not Supported
00:22:38.196 Reservations: Not Supported
00:22:38.196 Timestamp: Not Supported
00:22:38.196 Copy: Not Supported
00:22:38.196 Volatile Write Cache: Not Present
00:22:38.196 Atomic Write Unit (Normal): 1
00:22:38.196 Atomic Write Unit (PFail): 1
00:22:38.196 Atomic Compare & Write Unit: 1
00:22:38.196 Fused Compare & Write: Supported
00:22:38.196 Scatter-Gather List
00:22:38.196 SGL Command Set: Supported
00:22:38.196 SGL Keyed: Supported
00:22:38.196 SGL Bit Bucket Descriptor: Not Supported
00:22:38.196 SGL Metadata Pointer: Not Supported
00:22:38.196 Oversized SGL: Not Supported
00:22:38.196 SGL Metadata Address: Not Supported
00:22:38.196 SGL Offset: Supported
00:22:38.196 Transport SGL Data Block: Not Supported
00:22:38.196 Replay Protected Memory Block: Not Supported
00:22:38.196 
00:22:38.196 Firmware Slot Information
00:22:38.196 =========================
00:22:38.196 Active slot: 0
00:22:38.196 
00:22:38.196 
00:22:38.196 Error Log
00:22:38.196 =========
00:22:38.196 
00:22:38.196 Active Namespaces
00:22:38.196 =================
00:22:38.196 Discovery Log Page
00:22:38.196 ==================
00:22:38.196 Generation Counter: 2
00:22:38.196 Number of Records: 2
00:22:38.196 Record Format: 0
00:22:38.196 
00:22:38.196 Discovery Log Entry 0
00:22:38.196 ----------------------
00:22:38.196 Transport Type: 3 (TCP)
00:22:38.196 Address Family: 1 (IPv4)
00:22:38.196 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:38.196 Entry Flags:
00:22:38.196 Duplicate Returned Information: 1
00:22:38.196 Explicit Persistent Connection Support for Discovery: 1
00:22:38.196 Transport Requirements:
00:22:38.196 Secure Channel: Not Required
00:22:38.196 Port ID: 0 (0x0000)
00:22:38.196 Controller ID: 65535 (0xffff)
00:22:38.196 Admin Max SQ Size: 128
00:22:38.196 Transport Service Identifier: 4420
00:22:38.196 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:38.196 Transport Address: 10.0.0.2
00:22:38.196 Discovery Log Entry 1
00:22:38.196 ----------------------
00:22:38.196 Transport Type: 3 (TCP)
00:22:38.196 Address Family: 1 (IPv4)
00:22:38.196 Subsystem Type: 2 (NVM Subsystem)
00:22:38.196 Entry Flags:
00:22:38.196 Duplicate Returned Information: 0
00:22:38.196 Explicit Persistent Connection Support for Discovery: 0
00:22:38.196 Transport Requirements:
00:22:38.196 Secure Channel: Not Required
00:22:38.196 Port ID: 0 (0x0000)
00:22:38.196 Controller ID: 65535 (0xffff)
00:22:38.196 Admin Max SQ Size: 128
00:22:38.196 Transport Service Identifier: 4420
00:22:38.196 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:38.196 Transport Address: 10.0.0.2
[2024-07-15 16:04:04.899062] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:38.196 [2024-07-15 16:04:04.899085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f83c0) on tqpair=0x1298540
00:22:38.196 [2024-07-15 16:04:04.899099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.196 [2024-07-15 16:04:04.899108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8540) on tqpair=0x1298540
00:22:38.196 [2024-07-15 16:04:04.899116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.196 [2024-07-15 16:04:04.899124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f86c0) on tqpair=0x1298540
00:22:38.196 [2024-07-15 16:04:04.899132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.196 [2024-07-15 16:04:04.899140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540
00:22:38.196 [2024-07-15 16:04:04.899148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.196 [2024-07-15 16:04:04.899166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:38.196 [2024-07-15 16:04:04.899176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:38.196 [2024-07-15 16:04:04.899182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540)
00:22:38.196 [2024-07-15 16:04:04.899193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.196 [2024-07-15 16:04:04.899219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0
00:22:38.196 [2024-07-15 16:04:04.899455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:38.196 [2024-07-15 16:04:04.899468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:38.196 [2024-07-15 16:04:04.899475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:38.196 [2024-07-15 16:04:04.899481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540
00:22:38.196 [2024-07-15 16:04:04.899495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:38.196 [2024-07-15 16:04:04.899503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:38.196 [2024-07-15 16:04:04.899509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540)
00:22:38.196 [2024-07-15
16:04:04.899520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.196 [2024-07-15 16:04:04.899547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.196 [2024-07-15 16:04:04.899707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.196 [2024-07-15 16:04:04.899723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.196 [2024-07-15 16:04:04.899729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.196 [2024-07-15 16:04:04.899736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.196 [2024-07-15 16:04:04.899746] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:38.196 [2024-07-15 16:04:04.899756] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:38.196 [2024-07-15 16:04:04.899772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.196 [2024-07-15 16:04:04.899781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.196 [2024-07-15 16:04:04.899792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.196 [2024-07-15 16:04:04.899803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.196 [2024-07-15 16:04:04.899824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.899989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.900003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.900010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.900034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.900060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.900081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.900212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.900228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.900235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.900258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900274] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.900285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.900305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.900430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.900445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.900452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.900475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.900501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.900522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.900655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.900667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.900674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.900696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.900726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.900747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.900887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.900901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.900908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.900930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.900946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.900957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.900977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.901100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.901113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.901120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.901142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.901168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.901188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.901314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.901329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.901336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.901359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.901386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.901406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.901542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.901558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.901564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.901587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.901614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.901639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 
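The long run of FABRIC PROPERTY GET completions through this stretch is the driver polling the discovery controller's CSTS register during shutdown (the earlier records noted RTD3E = 0 and a 10000 ms shutdown timeout; "shutdown complete" arrives just below). A hand-rolled version of that check might look like the following sketch; shutdown_finished is a hypothetical helper, not SPDK internals:

#include <stdbool.h>

#include "spdk/nvme.h"

static bool
shutdown_finished(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Over NVMe-oF, each CSTS read is one Property Get, i.e. one of the
	 * FABRIC PROPERTY GET completions logged above. */
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}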
[2024-07-15 16:04:04.901761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.901777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.901783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.901807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.901823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.901833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.901853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.905902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.905920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.905927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.905934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.905952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.905962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.905969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1298540) 00:22:38.197 [2024-07-15 16:04:04.905979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.197 [2024-07-15 16:04:04.906001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f8840, cid 3, qid 0 00:22:38.197 [2024-07-15 16:04:04.906159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.197 [2024-07-15 16:04:04.906174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.197 [2024-07-15 16:04:04.906181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.197 [2024-07-15 16:04:04.906188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f8840) on tqpair=0x1298540 00:22:38.197 [2024-07-15 16:04:04.906202] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:38.197 00:22:38.197 16:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:38.197 [2024-07-15 16:04:04.941729] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
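The spdk_nvme_identify invocation above hands its -r argument to the driver as a transport ID string. A sketch of what that boils down to, assuming the same target as in this log; this is a simplification of the real example binary, which adds option parsing and far more output:

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK/DPDK environment (the "EAL parameters" record below). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that was passed to -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* This single call drives the whole admin-queue bring-up traced in the
	 * DEBUG records that follow: connect, icreq, FABRIC CONNECT, register
	 * reads, identify, AER setup, and so on. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.40s\n", cdata->mn);	/* "SPDK bdev Controller" here */

	spdk_nvme_detach(ctrlr);
	return 0;
}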
00:22:38.197 [2024-07-15 16:04:04.941772] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211398 ] 00:22:38.197 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.197 [2024-07-15 16:04:04.976683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:38.197 [2024-07-15 16:04:04.976734] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:38.197 [2024-07-15 16:04:04.976743] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:38.197 [2024-07-15 16:04:04.976760] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:38.197 [2024-07-15 16:04:04.976770] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:38.197 [2024-07-15 16:04:04.977076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:38.197 [2024-07-15 16:04:04.977116] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x185a540 0 00:22:38.197 [2024-07-15 16:04:04.983894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:38.197 [2024-07-15 16:04:04.983913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:38.198 [2024-07-15 16:04:04.983932] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:38.198 [2024-07-15 16:04:04.983938] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:38.198 [2024-07-15 16:04:04.983977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.983989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.983997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.984011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:38.198 [2024-07-15 16:04:04.984037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.991891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.991910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.991917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.991924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.991943] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:38.198 [2024-07-15 16:04:04.991955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:38.198 [2024-07-15 16:04:04.991964] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:38.198 [2024-07-15 16:04:04.991982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.991991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
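The "read vs" and "read cap" states just traced fetch the Version and Capabilities properties over FABRIC PROPERTY GET. Once bring-up completes, the cached values can be read back from the controller handle; a small sketch, with the values this particular target reports noted in the comment:

#include <stdio.h>

#include "spdk/nvme.h"

static void
print_basic_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* For this target: NVMe 1.3 and MQES 127, i.e. the
	 * "Maximum Queue Entries: 128" line in the identify output later on. */
	printf("VS %u.%u, MQES %u\n", (unsigned)vs.bits.mjr,
	       (unsigned)vs.bits.mnr, (unsigned)cap.bits.mqes);
}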
00:22:38.198 [2024-07-15 16:04:04.991997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.992008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.992032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.992199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.992212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.992219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.992234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:38.198 [2024-07-15 16:04:04.992247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:38.198 [2024-07-15 16:04:04.992259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.992283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.992304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.992424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.992437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.992444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.992459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:38.198 [2024-07-15 16:04:04.992472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.992484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.992508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.992528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.992661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.992677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:38.198 [2024-07-15 16:04:04.992684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.992699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.992716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.992742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.992763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.992906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.992922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.992929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.992936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.992944] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:38.198 [2024-07-15 16:04:04.992953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.992966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.993076] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:38.198 [2024-07-15 16:04:04.993083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.993094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.993122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.993144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.993299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.993315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.993322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on 
tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.993337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:38.198 [2024-07-15 16:04:04.993353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.993379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.993400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.993523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:04.993536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:04.993543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:04.993556] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:38.198 [2024-07-15 16:04:04.993565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:38.198 [2024-07-15 16:04:04.993578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:38.198 [2024-07-15 16:04:04.993593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:38.198 [2024-07-15 16:04:04.993607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.198 [2024-07-15 16:04:04.993625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.198 [2024-07-15 16:04:04.993646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.198 [2024-07-15 16:04:04.993812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.198 [2024-07-15 16:04:04.993827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.198 [2024-07-15 16:04:04.993834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=4096, cccid=0 00:22:38.198 [2024-07-15 16:04:04.993848] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ba3c0) on tqpair(0x185a540): expected_datao=0, payload_size=4096 00:22:38.198 [2024-07-15 16:04:04.993855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993873] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:04.993892] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:05.034891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.198 [2024-07-15 16:04:05.034910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.198 [2024-07-15 16:04:05.034921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.198 [2024-07-15 16:04:05.034928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.198 [2024-07-15 16:04:05.034940] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:38.198 [2024-07-15 16:04:05.034952] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:38.198 [2024-07-15 16:04:05.034960] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:38.199 [2024-07-15 16:04:05.034967] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:38.199 [2024-07-15 16:04:05.034974] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:38.199 [2024-07-15 16:04:05.034982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.034997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.199 [2024-07-15 16:04:05.035073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.199 [2024-07-15 16:04:05.035235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.199 [2024-07-15 16:04:05.035247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.199 [2024-07-15 16:04:05.035254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.199 [2024-07-15 16:04:05.035271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.199 [2024-07-15 16:04:05.035305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035318] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.199 [2024-07-15 16:04:05.035336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.199 [2024-07-15 16:04:05.035367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.199 [2024-07-15 16:04:05.035416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.199 [2024-07-15 16:04:05.035486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba3c0, cid 0, qid 0 00:22:38.199 [2024-07-15 16:04:05.035511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba540, cid 1, qid 0 00:22:38.199 [2024-07-15 16:04:05.035520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba6c0, cid 2, qid 0 00:22:38.199 [2024-07-15 16:04:05.035527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.199 [2024-07-15 16:04:05.035535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.199 [2024-07-15 16:04:05.035729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.199 [2024-07-15 16:04:05.035745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.199 [2024-07-15 16:04:05.035752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.199 [2024-07-15 16:04:05.035766] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:38.199 [2024-07-15 16:04:05.035775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.035827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.035841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.035851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.199 [2024-07-15 16:04:05.035872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.199 [2024-07-15 16:04:05.036042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.199 [2024-07-15 16:04:05.036055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.199 [2024-07-15 16:04:05.036062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.036069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.199 [2024-07-15 16:04:05.036134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.036153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.036169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.036176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.036187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.199 [2024-07-15 16:04:05.036227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.199 [2024-07-15 16:04:05.036436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.199 [2024-07-15 16:04:05.036452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.199 [2024-07-15 16:04:05.036459] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.036466] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=4096, cccid=4 00:22:38.199 [2024-07-15 16:04:05.036473] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ba9c0) on tqpair(0x185a540): expected_datao=0, payload_size=4096 00:22:38.199 [2024-07-15 16:04:05.036481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.036498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.036507] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:38.199 [2024-07-15 16:04:05.077036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.199 [2024-07-15 16:04:05.077043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.199 [2024-07-15 16:04:05.077075] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:38.199 [2024-07-15 16:04:05.077093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.077111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.077126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.077145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.199 [2024-07-15 16:04:05.077168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.199 [2024-07-15 16:04:05.077319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.199 [2024-07-15 16:04:05.077332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.199 [2024-07-15 16:04:05.077339] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077346] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=4096, cccid=4 00:22:38.199 [2024-07-15 16:04:05.077353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ba9c0) on tqpair(0x185a540): expected_datao=0, payload_size=4096 00:22:38.199 [2024-07-15 16:04:05.077361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077377] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.077386] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.121888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.199 [2024-07-15 16:04:05.121908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.199 [2024-07-15 16:04:05.121916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.121923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.199 [2024-07-15 16:04:05.121946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.121967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:38.199 [2024-07-15 16:04:05.121986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.121995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.199 [2024-07-15 16:04:05.122007] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.199 [2024-07-15 16:04:05.122030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.199 [2024-07-15 16:04:05.122202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.199 [2024-07-15 16:04:05.122215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.199 [2024-07-15 16:04:05.122221] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.122228] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=4096, cccid=4 00:22:38.199 [2024-07-15 16:04:05.122235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ba9c0) on tqpair(0x185a540): expected_datao=0, payload_size=4096 00:22:38.199 [2024-07-15 16:04:05.122243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.199 [2024-07-15 16:04:05.122259] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.200 [2024-07-15 16:04:05.122268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.163040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.163047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.163068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163141] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:38.458 [2024-07-15 16:04:05.163149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:38.458 [2024-07-15 16:04:05.163157] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:38.458 [2024-07-15 16:04:05.163176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.163197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.163208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.163231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.458 [2024-07-15 16:04:05.163261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 00:22:38.458 [2024-07-15 16:04:05.163274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bab40, cid 5, qid 0 00:22:38.458 [2024-07-15 16:04:05.163411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.163423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.163430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.163447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.163457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.163463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bab40) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.163486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.163505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.163526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bab40, cid 5, qid 0 00:22:38.458 [2024-07-15 16:04:05.163681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.163696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.163703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bab40) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.163726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.163746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.163766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bab40, cid 5, qid 0 00:22:38.458 [2024-07-15 16:04:05.163897] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.163911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.163918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bab40) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.163940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.163949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.163959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.163980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bab40, cid 5, qid 0 00:22:38.458 [2024-07-15 16:04:05.164113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.164129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.164135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bab40) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.164167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.164191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.164204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.164221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.164233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.164249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.164277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x185a540) 00:22:38.458 [2024-07-15 16:04:05.164294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-15 16:04:05.164315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bab40, cid 5, qid 0 00:22:38.458 [2024-07-15 16:04:05.164341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba9c0, cid 4, qid 0 
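A few records back the driver noted "Namespace 1 was added". After the controller reaches the ready state, those active namespaces can be walked with the public iterators; a minimal sketch, assuming ctrlr is the connected handle from earlier:

#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	/* Iterate only over active namespace IDs; inactive IDs are skipped. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace %" PRIu32 ": %" PRIu64 " sectors of %" PRIu32 " bytes\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}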
00:22:38.458 [2024-07-15 16:04:05.164350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bacc0, cid 6, qid 0 00:22:38.458 [2024-07-15 16:04:05.164357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bae40, cid 7, qid 0 00:22:38.458 [2024-07-15 16:04:05.164587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.458 [2024-07-15 16:04:05.164599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.458 [2024-07-15 16:04:05.164606] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164612] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=8192, cccid=5 00:22:38.458 [2024-07-15 16:04:05.164620] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bab40) on tqpair(0x185a540): expected_datao=0, payload_size=8192 00:22:38.458 [2024-07-15 16:04:05.164627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164700] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164710] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.458 [2024-07-15 16:04:05.164728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.458 [2024-07-15 16:04:05.164735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164741] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=512, cccid=4 00:22:38.458 [2024-07-15 16:04:05.164748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ba9c0) on tqpair(0x185a540): expected_datao=0, payload_size=512 00:22:38.458 [2024-07-15 16:04:05.164756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164765] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164772] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.458 [2024-07-15 16:04:05.164789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.458 [2024-07-15 16:04:05.164795] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164802] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=512, cccid=6 00:22:38.458 [2024-07-15 16:04:05.164813] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bacc0) on tqpair(0x185a540): expected_datao=0, payload_size=512 00:22:38.458 [2024-07-15 16:04:05.164820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164830] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164837] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.458 [2024-07-15 16:04:05.164854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.458 [2024-07-15 16:04:05.164861] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.164867] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185a540): datao=0, datal=4096, cccid=7 00:22:38.458 [2024-07-15 16:04:05.164874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bae40) on tqpair(0x185a540): expected_datao=0, payload_size=4096 00:22:38.458 [2024-07-15 16:04:05.168893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.168906] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.168913] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.168926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.168936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.168943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.168949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bab40) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.168968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.168979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.168986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.458 [2024-07-15 16:04:05.168993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba9c0) on tqpair=0x185a540 00:22:38.458 [2024-07-15 16:04:05.169008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.458 [2024-07-15 16:04:05.169018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.458 [2024-07-15 16:04:05.169025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.459 [2024-07-15 16:04:05.169032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bacc0) on tqpair=0x185a540 00:22:38.459 [2024-07-15 16:04:05.169042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.459 [2024-07-15 16:04:05.169052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.459 [2024-07-15 16:04:05.169059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.459 [2024-07-15 16:04:05.169065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bae40) on tqpair=0x185a540 00:22:38.459 ===================================================== 00:22:38.459 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.459 ===================================================== 00:22:38.459 Controller Capabilities/Features 00:22:38.459 ================================ 00:22:38.459 Vendor ID: 8086 00:22:38.459 Subsystem Vendor ID: 8086 00:22:38.459 Serial Number: SPDK00000000000001 00:22:38.459 Model Number: SPDK bdev Controller 00:22:38.459 Firmware Version: 24.09 00:22:38.459 Recommended Arb Burst: 6 00:22:38.459 IEEE OUI Identifier: e4 d2 5c 00:22:38.459 Multi-path I/O 00:22:38.459 May have multiple subsystem ports: Yes 00:22:38.459 May have multiple controllers: Yes 00:22:38.459 Associated with SR-IOV VF: No 00:22:38.459 Max Data Transfer Size: 131072 00:22:38.459 Max Number of Namespaces: 32 00:22:38.459 Max Number of I/O Queues: 127 00:22:38.459 NVMe Specification Version (VS): 1.3 00:22:38.459 NVMe Specification Version (Identify): 1.3 00:22:38.459 Maximum Queue Entries: 128 00:22:38.459 Contiguous Queues Required: Yes 00:22:38.459 
Arbitration Mechanisms Supported 00:22:38.459 Weighted Round Robin: Not Supported 00:22:38.459 Vendor Specific: Not Supported 00:22:38.459 Reset Timeout: 15000 ms 00:22:38.459 Doorbell Stride: 4 bytes 00:22:38.459 NVM Subsystem Reset: Not Supported 00:22:38.459 Command Sets Supported 00:22:38.459 NVM Command Set: Supported 00:22:38.459 Boot Partition: Not Supported 00:22:38.459 Memory Page Size Minimum: 4096 bytes 00:22:38.459 Memory Page Size Maximum: 4096 bytes 00:22:38.459 Persistent Memory Region: Not Supported 00:22:38.459 Optional Asynchronous Events Supported 00:22:38.459 Namespace Attribute Notices: Supported 00:22:38.459 Firmware Activation Notices: Not Supported 00:22:38.459 ANA Change Notices: Not Supported 00:22:38.459 PLE Aggregate Log Change Notices: Not Supported 00:22:38.459 LBA Status Info Alert Notices: Not Supported 00:22:38.459 EGE Aggregate Log Change Notices: Not Supported 00:22:38.459 Normal NVM Subsystem Shutdown event: Not Supported 00:22:38.459 Zone Descriptor Change Notices: Not Supported 00:22:38.459 Discovery Log Change Notices: Not Supported 00:22:38.459 Controller Attributes 00:22:38.459 128-bit Host Identifier: Supported 00:22:38.459 Non-Operational Permissive Mode: Not Supported 00:22:38.459 NVM Sets: Not Supported 00:22:38.459 Read Recovery Levels: Not Supported 00:22:38.459 Endurance Groups: Not Supported 00:22:38.459 Predictable Latency Mode: Not Supported 00:22:38.459 Traffic Based Keep ALive: Not Supported 00:22:38.459 Namespace Granularity: Not Supported 00:22:38.459 SQ Associations: Not Supported 00:22:38.459 UUID List: Not Supported 00:22:38.459 Multi-Domain Subsystem: Not Supported 00:22:38.459 Fixed Capacity Management: Not Supported 00:22:38.459 Variable Capacity Management: Not Supported 00:22:38.459 Delete Endurance Group: Not Supported 00:22:38.459 Delete NVM Set: Not Supported 00:22:38.459 Extended LBA Formats Supported: Not Supported 00:22:38.459 Flexible Data Placement Supported: Not Supported 00:22:38.459 00:22:38.459 Controller Memory Buffer Support 00:22:38.459 ================================ 00:22:38.459 Supported: No 00:22:38.459 00:22:38.459 Persistent Memory Region Support 00:22:38.459 ================================ 00:22:38.459 Supported: No 00:22:38.459 00:22:38.459 Admin Command Set Attributes 00:22:38.459 ============================ 00:22:38.459 Security Send/Receive: Not Supported 00:22:38.459 Format NVM: Not Supported 00:22:38.459 Firmware Activate/Download: Not Supported 00:22:38.459 Namespace Management: Not Supported 00:22:38.459 Device Self-Test: Not Supported 00:22:38.459 Directives: Not Supported 00:22:38.459 NVMe-MI: Not Supported 00:22:38.459 Virtualization Management: Not Supported 00:22:38.459 Doorbell Buffer Config: Not Supported 00:22:38.459 Get LBA Status Capability: Not Supported 00:22:38.459 Command & Feature Lockdown Capability: Not Supported 00:22:38.459 Abort Command Limit: 4 00:22:38.459 Async Event Request Limit: 4 00:22:38.459 Number of Firmware Slots: N/A 00:22:38.459 Firmware Slot 1 Read-Only: N/A 00:22:38.459 Firmware Activation Without Reset: N/A 00:22:38.459 Multiple Update Detection Support: N/A 00:22:38.459 Firmware Update Granularity: No Information Provided 00:22:38.459 Per-Namespace SMART Log: No 00:22:38.459 Asymmetric Namespace Access Log Page: Not Supported 00:22:38.459 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:38.459 Command Effects Log Page: Supported 00:22:38.459 Get Log Page Extended Data: Supported 00:22:38.459 Telemetry Log Pages: Not Supported 00:22:38.459 Persistent Event Log 
Pages: Not Supported 00:22:38.459 Supported Log Pages Log Page: May Support 00:22:38.459 Commands Supported & Effects Log Page: Not Supported 00:22:38.459 Feature Identifiers & Effects Log Page:May Support 00:22:38.459 NVMe-MI Commands & Effects Log Page: May Support 00:22:38.459 Data Area 4 for Telemetry Log: Not Supported 00:22:38.459 Error Log Page Entries Supported: 128 00:22:38.459 Keep Alive: Supported 00:22:38.459 Keep Alive Granularity: 10000 ms 00:22:38.459 00:22:38.459 NVM Command Set Attributes 00:22:38.459 ========================== 00:22:38.459 Submission Queue Entry Size 00:22:38.459 Max: 64 00:22:38.459 Min: 64 00:22:38.459 Completion Queue Entry Size 00:22:38.459 Max: 16 00:22:38.459 Min: 16 00:22:38.459 Number of Namespaces: 32 00:22:38.459 Compare Command: Supported 00:22:38.459 Write Uncorrectable Command: Not Supported 00:22:38.459 Dataset Management Command: Supported 00:22:38.459 Write Zeroes Command: Supported 00:22:38.459 Set Features Save Field: Not Supported 00:22:38.459 Reservations: Supported 00:22:38.459 Timestamp: Not Supported 00:22:38.459 Copy: Supported 00:22:38.459 Volatile Write Cache: Present 00:22:38.459 Atomic Write Unit (Normal): 1 00:22:38.459 Atomic Write Unit (PFail): 1 00:22:38.459 Atomic Compare & Write Unit: 1 00:22:38.459 Fused Compare & Write: Supported 00:22:38.459 Scatter-Gather List 00:22:38.459 SGL Command Set: Supported 00:22:38.459 SGL Keyed: Supported 00:22:38.459 SGL Bit Bucket Descriptor: Not Supported 00:22:38.459 SGL Metadata Pointer: Not Supported 00:22:38.459 Oversized SGL: Not Supported 00:22:38.459 SGL Metadata Address: Not Supported 00:22:38.459 SGL Offset: Supported 00:22:38.459 Transport SGL Data Block: Not Supported 00:22:38.459 Replay Protected Memory Block: Not Supported 00:22:38.459 00:22:38.459 Firmware Slot Information 00:22:38.459 ========================= 00:22:38.459 Active slot: 1 00:22:38.459 Slot 1 Firmware Revision: 24.09 00:22:38.459 00:22:38.459 00:22:38.459 Commands Supported and Effects 00:22:38.459 ============================== 00:22:38.459 Admin Commands 00:22:38.459 -------------- 00:22:38.459 Get Log Page (02h): Supported 00:22:38.459 Identify (06h): Supported 00:22:38.459 Abort (08h): Supported 00:22:38.459 Set Features (09h): Supported 00:22:38.459 Get Features (0Ah): Supported 00:22:38.459 Asynchronous Event Request (0Ch): Supported 00:22:38.459 Keep Alive (18h): Supported 00:22:38.459 I/O Commands 00:22:38.459 ------------ 00:22:38.459 Flush (00h): Supported LBA-Change 00:22:38.459 Write (01h): Supported LBA-Change 00:22:38.459 Read (02h): Supported 00:22:38.459 Compare (05h): Supported 00:22:38.459 Write Zeroes (08h): Supported LBA-Change 00:22:38.459 Dataset Management (09h): Supported LBA-Change 00:22:38.459 Copy (19h): Supported LBA-Change 00:22:38.459 00:22:38.459 Error Log 00:22:38.459 ========= 00:22:38.459 00:22:38.459 Arbitration 00:22:38.459 =========== 00:22:38.459 Arbitration Burst: 1 00:22:38.459 00:22:38.459 Power Management 00:22:38.459 ================ 00:22:38.459 Number of Power States: 1 00:22:38.459 Current Power State: Power State #0 00:22:38.459 Power State #0: 00:22:38.459 Max Power: 0.00 W 00:22:38.459 Non-Operational State: Operational 00:22:38.459 Entry Latency: Not Reported 00:22:38.459 Exit Latency: Not Reported 00:22:38.459 Relative Read Throughput: 0 00:22:38.459 Relative Read Latency: 0 00:22:38.459 Relative Write Throughput: 0 00:22:38.459 Relative Write Latency: 0 00:22:38.459 Idle Power: Not Reported 00:22:38.459 Active Power: Not Reported 00:22:38.459 
Non-Operational Permissive Mode: Not Supported 00:22:38.459 00:22:38.459 Health Information 00:22:38.459 ================== 00:22:38.459 Critical Warnings: 00:22:38.459 Available Spare Space: OK 00:22:38.459 Temperature: OK 00:22:38.459 Device Reliability: OK 00:22:38.459 Read Only: No 00:22:38.459 Volatile Memory Backup: OK 00:22:38.459 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:38.459 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:38.459 Available Spare: 0% 00:22:38.459 Available Spare Threshold: 0% 00:22:38.460 Life Percentage Used:[2024-07-15 16:04:05.169194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.169206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.169217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.169240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bae40, cid 7, qid 0 00:22:38.460 [2024-07-15 16:04:05.169422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.169435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.169442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.169448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bae40) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.169493] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:38.460 [2024-07-15 16:04:05.169513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba3c0) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.169526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.460 [2024-07-15 16:04:05.169536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba540) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.169544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.460 [2024-07-15 16:04:05.169552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba6c0) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.169574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.460 [2024-07-15 16:04:05.169582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.169589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.460 [2024-07-15 16:04:05.169601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.169609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.169615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.169625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.169647] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.172888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.172905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.172912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.172918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.172930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.172938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.172944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.172954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.172996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.173171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.173187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.173193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.173208] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:38.460 [2024-07-15 16:04:05.173216] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:38.460 [2024-07-15 16:04:05.173232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.173258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.173279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.173422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.173437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.173447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.173471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.173498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.173518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.173642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.173654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.173661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.173683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.173709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.173729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.173854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.173870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.173884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.173909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.173925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.173935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.173956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.174080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.174092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.174099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.174121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.174147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.174167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 
16:04:05.174290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.174305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.174312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.174339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.174366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.174386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.174503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.174515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.174522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.174544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.174569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.174590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.174709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.174721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 [2024-07-15 16:04:05.174728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.174750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.460 [2024-07-15 16:04:05.174776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.460 [2024-07-15 16:04:05.174796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.460 [2024-07-15 16:04:05.174918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.460 [2024-07-15 16:04:05.174931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.460 
[2024-07-15 16:04:05.174938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.460 [2024-07-15 16:04:05.174961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.460 [2024-07-15 16:04:05.174977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.174987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.175007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.175127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.175139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.175146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.175172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.175199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.175219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.175337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.175349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.175356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.175378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.175404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.175424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.175547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.175562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.175569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.175592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.175618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.175638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.175762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.175777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.175784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.175807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.175823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.175833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.175854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.179893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.179910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.179917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.179924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.179941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.179955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.179962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185a540) 00:22:38.461 [2024-07-15 16:04:05.179973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.461 [2024-07-15 16:04:05.179995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ba840, cid 3, qid 0 00:22:38.461 [2024-07-15 16:04:05.180161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.461 [2024-07-15 16:04:05.180173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.461 [2024-07-15 16:04:05.180179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.461 [2024-07-15 16:04:05.180186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ba840) on tqpair=0x185a540 00:22:38.461 [2024-07-15 16:04:05.180199] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 
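The trace above records an orderly host-side teardown: nvme_ctrlr_destruct_async aborts the queued admin requests (the ABORTED - SQ DELETION completions), programs the shutdown through a fabric property write (the FABRIC PROPERTY SET on cid:3), then polls with repeated FABRIC PROPERTY GET commands until nvme_ctrlr_shutdown_poll_async sees the controller report completion, here after 6 ms against the logged 10000 ms timeout. As a minimal sketch, the identify dump interleaved with this trace could be reproduced by hand against the same endpoint; the spdk_nvme_identify path below is an assumption modeled on the spdk_nvme_perf binary invoked later in this log, while the transport string is taken verbatim from the trace:

# Sketch only: query the TCP-attached controller that produced the dump above.
# The binary name/path is assumed; the transport ID values appear in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'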
00:22:38.461 0% 00:22:38.461 Data Units Read: 0 00:22:38.461 Data Units Written: 0 00:22:38.461 Host Read Commands: 0 00:22:38.461 Host Write Commands: 0 00:22:38.461 Controller Busy Time: 0 minutes 00:22:38.461 Power Cycles: 0 00:22:38.461 Power On Hours: 0 hours 00:22:38.461 Unsafe Shutdowns: 0 00:22:38.461 Unrecoverable Media Errors: 0 00:22:38.461 Lifetime Error Log Entries: 0 00:22:38.461 Warning Temperature Time: 0 minutes 00:22:38.461 Critical Temperature Time: 0 minutes 00:22:38.461 00:22:38.461 Number of Queues 00:22:38.461 ================ 00:22:38.461 Number of I/O Submission Queues: 127 00:22:38.461 Number of I/O Completion Queues: 127 00:22:38.461 00:22:38.461 Active Namespaces 00:22:38.461 ================= 00:22:38.461 Namespace ID:1 00:22:38.461 Error Recovery Timeout: Unlimited 00:22:38.461 Command Set Identifier: NVM (00h) 00:22:38.461 Deallocate: Supported 00:22:38.461 Deallocated/Unwritten Error: Not Supported 00:22:38.461 Deallocated Read Value: Unknown 00:22:38.461 Deallocate in Write Zeroes: Not Supported 00:22:38.461 Deallocated Guard Field: 0xFFFF 00:22:38.461 Flush: Supported 00:22:38.461 Reservation: Supported 00:22:38.461 Namespace Sharing Capabilities: Multiple Controllers 00:22:38.461 Size (in LBAs): 131072 (0GiB) 00:22:38.461 Capacity (in LBAs): 131072 (0GiB) 00:22:38.461 Utilization (in LBAs): 131072 (0GiB) 00:22:38.461 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:38.461 EUI64: ABCDEF0123456789 00:22:38.461 UUID: c291e8f9-06c4-477c-87ed-6093e21207db 00:22:38.461 Thin Provisioning: Not Supported 00:22:38.461 Per-NS Atomic Units: Yes 00:22:38.461 Atomic Boundary Size (Normal): 0 00:22:38.461 Atomic Boundary Size (PFail): 0 00:22:38.461 Atomic Boundary Offset: 0 00:22:38.461 Maximum Single Source Range Length: 65535 00:22:38.461 Maximum Copy Length: 65535 00:22:38.461 Maximum Source Range Count: 1 00:22:38.461 NGUID/EUI64 Never Reused: No 00:22:38.461 Namespace Write Protected: No 00:22:38.461 Number of LBA Formats: 1 00:22:38.461 Current LBA Format: LBA Format #00 00:22:38.461 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:38.461 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.461 rmmod nvme_tcp 00:22:38.461 rmmod nvme_fabrics 00:22:38.461 rmmod nvme_keyring 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@124 -- # set -e 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1211242 ']' 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1211242 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1211242 ']' 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1211242 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1211242 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1211242' 00:22:38.461 killing process with pid 1211242 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1211242 00:22:38.461 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1211242 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.721 16:04:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.797 16:04:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.797 00:22:40.797 real 0m6.092s 00:22:40.797 user 0m7.577s 00:22:40.797 sys 0m1.853s 00:22:40.797 16:04:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.797 16:04:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.797 ************************************ 00:22:40.797 END TEST nvmf_identify 00:22:40.797 ************************************ 00:22:40.797 16:04:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:40.797 16:04:07 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:40.797 16:04:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.797 16:04:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.797 16:04:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.797 ************************************ 00:22:40.797 START TEST nvmf_perf 00:22:40.797 ************************************ 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:40.797 * Looking for test storage... 
00:22:40.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.797 16:04:07 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.797 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.053 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.054 16:04:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:43.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.008 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:43.009 00:22:43.009 --- 10.0.0.2 ping statistics --- 00:22:43.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.009 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:43.009 00:22:43.009 --- 10.0.0.1 ping statistics --- 00:22:43.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.009 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1213331 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1213331 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1213331 ']' 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.009 16:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:43.009 [2024-07-15 16:04:09.827394] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:43.009 [2024-07-15 16:04:09.827483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.009 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.009 [2024-07-15 16:04:09.895068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.269 [2024-07-15 16:04:10.015157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.269 [2024-07-15 16:04:10.015216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.269 [2024-07-15 16:04:10.015231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.269 [2024-07-15 16:04:10.015243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.269 [2024-07-15 16:04:10.015271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.269 [2024-07-15 16:04:10.015338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.269 [2024-07-15 16:04:10.015395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.269 [2024-07-15 16:04:10.015456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.269 [2024-07-15 16:04:10.015459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:44.203 16:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:47.494 16:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:47.494 16:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:47.494 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:47.494 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:47.751 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:47.751 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:47.751 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:47.751 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:47.751 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.008 [2024-07-15 16:04:14.705604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.008 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.265 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:48.265 16:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.522 16:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:48.522 16:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:48.779 16:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.036 [2024-07-15 16:04:15.833636] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.036 16:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.294 16:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:49.294 16:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:49.294 16:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:49.294 16:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:50.670 Initializing NVMe Controllers 00:22:50.670 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:50.670 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:50.670 Initialization complete. Launching workers. 00:22:50.670 ======================================================== 00:22:50.670 Latency(us) 00:22:50.670 Device Information : IOPS MiB/s Average min max 00:22:50.670 PCIE (0000:88:00.0) NSID 1 from core 0: 85659.63 334.61 373.20 39.25 4305.80 00:22:50.670 ======================================================== 00:22:50.670 Total : 85659.63 334.61 373.20 39.25 4305.80 00:22:50.670 00:22:50.671 16:04:17 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:50.671 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.059 Initializing NVMe Controllers 00:22:52.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:52.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:52.059 Initialization complete. Launching workers. 
00:22:52.059 ======================================================== 00:22:52.059 Latency(us) 00:22:52.059 Device Information : IOPS MiB/s Average min max 00:22:52.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.61 0.43 9205.66 189.91 45645.95 00:22:52.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.82 0.20 19451.60 6973.22 47902.79 00:22:52.059 ======================================================== 00:22:52.059 Total : 161.43 0.63 12494.48 189.91 47902.79 00:22:52.059 00:22:52.059 16:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:52.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.435 Initializing NVMe Controllers 00:22:53.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:53.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:53.435 Initialization complete. Launching workers. 00:22:53.435 ======================================================== 00:22:53.435 Latency(us) 00:22:53.435 Device Information : IOPS MiB/s Average min max 00:22:53.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8422.47 32.90 3799.79 567.06 7806.05 00:22:53.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3812.63 14.89 8410.14 5193.74 15930.82 00:22:53.436 ======================================================== 00:22:53.436 Total : 12235.10 47.79 5236.44 567.06 15930.82 00:22:53.436 00:22:53.436 16:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:53.436 16:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:53.436 16:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:53.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.981 Initializing NVMe Controllers 00:22:55.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.981 Controller IO queue size 128, less than required. 00:22:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:55.981 Controller IO queue size 128, less than required. 00:22:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:55.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.981 Initialization complete. Launching workers. 
00:22:55.981 ======================================================== 00:22:55.981 Latency(us) 00:22:55.981 Device Information : IOPS MiB/s Average min max 00:22:55.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 909.43 227.36 146531.28 89781.87 220298.74 00:22:55.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.96 149.24 218622.84 78425.71 305295.76 00:22:55.981 ======================================================== 00:22:55.981 Total : 1506.39 376.60 175099.92 78425.71 305295.76 00:22:55.981 00:22:55.981 16:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:55.981 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.981 No valid NVMe controllers or AIO or URING devices found 00:22:55.981 Initializing NVMe Controllers 00:22:55.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.981 Controller IO queue size 128, less than required. 00:22:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:55.981 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:55.981 Controller IO queue size 128, less than required. 00:22:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:55.981 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:55.981 WARNING: Some requested NVMe devices were skipped 00:22:55.981 16:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:55.981 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.514 Initializing NVMe Controllers 00:22:58.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.514 Controller IO queue size 128, less than required. 00:22:58.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.514 Controller IO queue size 128, less than required. 00:22:58.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:58.514 Initialization complete. Launching workers. 
00:22:58.514 00:22:58.514 ==================== 00:22:58.514 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:58.514 TCP transport: 00:22:58.514 polls: 26650 00:22:58.514 idle_polls: 9492 00:22:58.514 sock_completions: 17158 00:22:58.514 nvme_completions: 4393 00:22:58.514 submitted_requests: 6632 00:22:58.514 queued_requests: 1 00:22:58.514 00:22:58.514 ==================== 00:22:58.514 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:58.514 TCP transport: 00:22:58.514 polls: 24525 00:22:58.514 idle_polls: 8489 00:22:58.514 sock_completions: 16036 00:22:58.514 nvme_completions: 4227 00:22:58.514 submitted_requests: 6350 00:22:58.514 queued_requests: 1 00:22:58.514 ======================================================== 00:22:58.514 Latency(us) 00:22:58.514 Device Information : IOPS MiB/s Average min max 00:22:58.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1096.03 274.01 121595.29 64310.72 217060.91 00:22:58.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1054.61 263.65 122801.93 56001.56 183447.06 00:22:58.514 ======================================================== 00:22:58.514 Total : 2150.64 537.66 122186.99 56001.56 217060.91 00:22:58.514 00:22:58.514 16:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:58.514 16:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.774 rmmod nvme_tcp 00:22:58.774 rmmod nvme_fabrics 00:22:58.774 rmmod nvme_keyring 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1213331 ']' 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1213331 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1213331 ']' 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1213331 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213331 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.774 16:04:25 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213331' 00:22:58.774 killing process with pid 1213331 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1213331 00:22:58.774 16:04:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1213331 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.697 16:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.633 16:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.633 00:23:02.633 real 0m21.634s 00:23:02.633 user 1m8.249s 00:23:02.633 sys 0m4.839s 00:23:02.633 16:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.633 16:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:02.633 ************************************ 00:23:02.633 END TEST nvmf_perf 00:23:02.633 ************************************ 00:23:02.633 16:04:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:02.633 16:04:29 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:02.633 16:04:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.633 16:04:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.633 16:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.633 ************************************ 00:23:02.633 START TEST nvmf_fio_host 00:23:02.633 ************************************ 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:02.633 * Looking for test storage... 
00:23:02.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.633 16:04:29 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:04.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
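For readability: the device-classification loop being traced here boils down to matching each NIC's vendor:device pair against the ID tables built above. A rough bash equivalent (function and variable names are illustrative, not the script's own):

    # Illustrative sketch only: classify a PCI NIC by the ID tables seen in the trace.
    intel=0x8086; mellanox=0x15b3
    classify() {
      case "$1:$2" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;   # as matched for 0000:0a:00.0/1 above
        "$intel:0x37d2")                 echo x722 ;;
        "$mellanox:"*)                   echo mlx  ;;   # simplification over the per-ID list
        *)                               echo unknown ;;
      esac
    }
    classify 0x8086 0x159b   # -> e810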
00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:04.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:04.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:04.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
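With both cvl interfaces discovered and is_hw set, the harness next builds its point-to-point topology: the target side of the NIC pair is moved into a private network namespace and addressed, while the initiator side stays in the default namespace. Condensed from the nvmf_tcp_init trace that follows (same commands, minus the xtrace prefixes):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in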
00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.536 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:23:04.537 00:23:04.537 --- 10.0.0.2 ping statistics --- 00:23:04.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.537 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:23:04.537 00:23:04.537 --- 10.0.0.1 ping statistics --- 00:23:04.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.537 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1217302 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1217302 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1217302 ']' 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.537 16:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.796 [2024-07-15 16:04:31.486979] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:04.796 [2024-07-15 16:04:31.487069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.796 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.796 [2024-07-15 16:04:31.556312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.796 [2024-07-15 16:04:31.674030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:04.796 [2024-07-15 16:04:31.674087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.796 [2024-07-15 16:04:31.674101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.796 [2024-07-15 16:04:31.674113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.796 [2024-07-15 16:04:31.674123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.796 [2024-07-15 16:04:31.674213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.796 [2024-07-15 16:04:31.674273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.796 [2024-07-15 16:04:31.674332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.796 [2024-07-15 16:04:31.674335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.732 16:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.732 16:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:05.732 16:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:05.732 [2024-07-15 16:04:32.653557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.990 16:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:05.990 16:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.990 16:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.990 16:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:06.246 Malloc1 00:23:06.246 16:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.503 16:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:06.760 16:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.018 [2024-07-15 16:04:33.698328] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.018 16:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:07.275 16:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:07.275 16:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:07.275 16:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:07.275 16:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:07.275 16:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:07.275 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:07.275 fio-3.35 00:23:07.275 Starting 1 thread 00:23:07.533 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.074 00:23:10.074 test: (groupid=0, jobs=1): err= 0: pid=1217790: Mon Jul 15 16:04:36 2024 00:23:10.074 read: IOPS=8993, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec) 00:23:10.074 slat (usec): min=2, max=146, avg= 2.60, stdev= 1.79 00:23:10.074 clat (usec): min=3323, max=13797, avg=7836.02, stdev=586.63 00:23:10.074 lat (usec): min=3352, max=13800, avg=7838.62, stdev=586.53 00:23:10.074 clat percentiles (usec): 00:23:10.074 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:23:10.074 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:23:10.074 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:23:10.074 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[13173], 00:23:10.074 | 99.99th=[13698] 00:23:10.074 bw ( KiB/s): min=34688, 
max=36648, per=99.96%, avg=35956.00, stdev=877.83, samples=4 00:23:10.074 iops : min= 8672, max= 9162, avg=8989.00, stdev=219.46, samples=4 00:23:10.074 write: IOPS=9013, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec); 0 zone resets 00:23:10.074 slat (usec): min=2, max=139, avg= 2.74, stdev= 1.46 00:23:10.074 clat (usec): min=1407, max=12455, avg=6279.31, stdev=513.77 00:23:10.074 lat (usec): min=1415, max=12457, avg=6282.05, stdev=513.70 00:23:10.074 clat percentiles (usec): 00:23:10.074 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:23:10.074 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:23:10.074 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:23:10.074 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10290], 99.95th=[11600], 00:23:10.074 | 99.99th=[12387] 00:23:10.074 bw ( KiB/s): min=35536, max=36416, per=100.00%, avg=36068.00, stdev=406.11, samples=4 00:23:10.074 iops : min= 8884, max= 9104, avg=9017.00, stdev=101.53, samples=4 00:23:10.074 lat (msec) : 2=0.01%, 4=0.13%, 10=99.70%, 20=0.16% 00:23:10.074 cpu : usr=59.17%, sys=34.90%, ctx=76, majf=0, minf=41 00:23:10.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:10.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:10.074 issued rwts: total=18049,18090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:10.074 00:23:10.074 Run status group 0 (all jobs): 00:23:10.074 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:23:10.074 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:10.074 16:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.074 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:10.074 fio-3.35 00:23:10.074 Starting 1 thread 00:23:10.074 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.599 00:23:12.599 test: (groupid=0, jobs=1): err= 0: pid=1218118: Mon Jul 15 16:04:39 2024 00:23:12.599 read: IOPS=8258, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:23:12.599 slat (usec): min=3, max=108, avg= 3.95, stdev= 1.88 00:23:12.599 clat (usec): min=3345, max=16704, avg=9091.43, stdev=1938.70 00:23:12.599 lat (usec): min=3348, max=16708, avg=9095.38, stdev=1938.74 00:23:12.599 clat percentiles (usec): 00:23:12.599 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7439], 00:23:12.599 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:23:12.599 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11469], 95.00th=[12125], 00:23:12.599 | 99.00th=[14353], 99.50th=[15401], 99.90th=[16450], 99.95th=[16581], 00:23:12.599 | 99.99th=[16712] 00:23:12.599 bw ( KiB/s): min=55104, max=76384, per=51.59%, avg=68176.00, stdev=10025.44, samples=4 00:23:12.599 iops : min= 3444, max= 4774, avg=4261.00, stdev=626.59, samples=4 00:23:12.599 write: IOPS=4877, BW=76.2MiB/s (79.9MB/s)(139MiB/1830msec); 0 zone resets 00:23:12.599 slat (usec): min=30, max=173, avg=34.41, stdev= 5.83 00:23:12.600 clat (usec): min=7151, max=17890, avg=11141.48, stdev=1843.68 00:23:12.600 lat (usec): min=7197, max=17928, avg=11175.89, stdev=1843.69 00:23:12.600 clat percentiles (usec): 00:23:12.600 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:23:12.600 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:23:12.600 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13960], 95.00th=[14877], 00:23:12.600 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:23:12.600 | 99.99th=[17957] 00:23:12.600 bw ( KiB/s): min=58272, max=79584, per=90.78%, avg=70840.00, stdev=10056.06, samples=4 00:23:12.600 iops : min= 3642, max= 4974, avg=4427.50, stdev=628.50, samples=4 00:23:12.600 lat (msec) : 4=0.06%, 10=54.46%, 20=45.49% 00:23:12.600 cpu : usr=74.59%, sys=21.82%, ctx=23, majf=0, minf=69 
00:23:12.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:12.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:12.600 issued rwts: total=16584,8925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:12.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:12.600 00:23:12.600 Run status group 0 (all jobs): 00:23:12.600 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2008-2008msec 00:23:12.600 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=139MiB (146MB), run=1830-1830msec 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.600 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.600 rmmod nvme_tcp 00:23:12.859 rmmod nvme_fabrics 00:23:12.859 rmmod nvme_keyring 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1217302 ']' 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1217302 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1217302 ']' 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1217302 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1217302 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1217302' 00:23:12.859 killing process with pid 1217302 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1217302 00:23:12.859 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1217302 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.118 16:04:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.020 16:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:15.020 00:23:15.020 real 0m12.588s 00:23:15.020 user 0m38.125s 00:23:15.020 sys 0m3.905s 00:23:15.020 16:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.020 16:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.020 ************************************ 00:23:15.020 END TEST nvmf_fio_host 00:23:15.020 ************************************ 00:23:15.280 16:04:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:15.280 16:04:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:15.280 16:04:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:15.280 16:04:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.280 16:04:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.280 ************************************ 00:23:15.280 START TEST nvmf_failover 00:23:15.280 ************************************ 00:23:15.280 16:04:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:15.280 * Looking for test storage... 
00:23:15.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.280 16:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:17.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:17.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:17.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:17.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.183 16:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.183 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:23:17.184 00:23:17.184 --- 10.0.0.2 ping statistics --- 00:23:17.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.184 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:23:17.184 00:23:17.184 --- 10.0.0.1 ping statistics --- 00:23:17.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.184 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1220364 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1220364 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1220364 ']' 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.184 16:04:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.184 [2024-07-15 16:04:44.088308] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
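Note the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step above: because the target's launch command is kept as a bash array, prepending the ip netns exec words once makes this and every later nvmf_tgt invocation run inside the test namespace, which is how nvmfappstart produced the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt command logged above. A self-contained sketch of the pattern (binary path and flags as in this run):

# namespace-wrapping an app launch via array prefixing (sketch)
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i 0 -e 0xFFFF)                        # shared-memory id and trace-group mask
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepend once; array keeps word splitting safe
"${NVMF_APP[@]}" -m 0xE &                         # expands to: ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE
nvmfpid=$!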
00:23:17.184 [2024-07-15 16:04:44.088385] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.444 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.444 [2024-07-15 16:04:44.158241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:17.444 [2024-07-15 16:04:44.273539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.444 [2024-07-15 16:04:44.273605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.444 [2024-07-15 16:04:44.273621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.444 [2024-07-15 16:04:44.273635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.444 [2024-07-15 16:04:44.273647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.444 [2024-07-15 16:04:44.273740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.444 [2024-07-15 16:04:44.273848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.444 [2024-07-15 16:04:44.273851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:18.379 [2024-07-15 16:04:45.272149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.379 16:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:18.638 Malloc0 00:23:18.638 16:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.897 16:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:19.155 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.412 [2024-07-15 16:04:46.300756] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.412 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-07-15 16:04:46.549517] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
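Those five rpc.py calls are the complete target-side provisioning for the failover test: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem carrying that bdev as a namespace, and listeners on the ports the script will later juggle (4420 and 4421 here, 4422 just below). Collapsed into a standalone sketch, with $rpc as shorthand for the full scripts/rpc.py path:

# target provisioning for host/failover.sh (sketch; flags exactly as logged)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # -u: in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                   # three paths to fail over between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done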
00:23:19.669 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.956 [2024-07-15 16:04:46.802429] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1220729 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1220729 /var/tmp/bdevperf.sock 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1220729 ']' 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.956 16:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.525 16:04:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.525 16:04:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:20.525 16:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.783 NVMe0n1 00:23:21.041 16:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.041 00 00:23:21.041 16:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1220859 00:23:21.041 16:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.041 16:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:21.975 16:04:48 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.235 [2024-07-15 16:04:49.082897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb84070 is same with the state(5) to be set 00:23:22.235 [identical recv-state message repeated 12 more times for tqpair=0xb84070 while the removed path drains] 00:23:22.236 16:04:49 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:25.525 16:04:52 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.525 00 00:23:25.785 16:04:52 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.785 [2024-07-15 16:04:52.700399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85620 is same with the state(5) to be set 00:23:25.785 [identical recv-state message repeated 30 more times for tqpair=0xb85620] 00:23:26.046 16:04:52 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:29.332 16:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.332 [2024-07-15 16:04:55.957350] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.332 16:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:30.268 16:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:30.526 [2024-07-15 16:04:57.229303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85e30 is same with the state(5) to be set 00:23:30.526 [identical recv-state message repeated 13 more times for tqpair=0xb85e30] 00:23:30.526 16:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1220859 00:23:37.095 0 00:23:37.095 16:05:02 nvmf_tcp.nvmf_failover --
host/failover.sh@61 -- # killprocess 1220729 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1220729 ']' 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1220729 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220729 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220729' 00:23:37.096 killing process with pid 1220729 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1220729 00:23:37.096 16:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1220729 00:23:37.096 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:37.096 [2024-07-15 16:04:46.864015] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:37.096 [2024-07-15 16:04:46.864107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220729 ] 00:23:37.096 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.096 [2024-07-15 16:04:46.924900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.096 [2024-07-15 16:04:47.034507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.096 Running I/O for 15 seconds... 
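bdevperf now holds two paths to the same namespace (ports 4420 and 4421, both attached under controller name NVMe0), and everything that follows in try.txt is host/failover.sh flipping listeners underneath the running verify workload. Reduced to its essential calls; $rpc targets the nvmf_tgt as before, $brpc stands for rpc.py -s /var/tmp/bdevperf.sock, and both shorthands are introduced here rather than taken from the script:

# listener flips driven while bdevperf runs its 15 s verify job (sketch)
NQN=nqn.2016-06.io.spdk:cnode1
sleep 1; $rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # drop the active path mid-I/O
sleep 3; $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"   # give bdevperf a third path
$rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421            # drop the second path
sleep 3; $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore the first path
sleep 1; $rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # drop the third path
wait "$run_test_pid"   # bdevperf must still exit 0, the lone '0' logged above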
00:23:37.096 [2024-07-15 16:04:49.083716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.096 [2024-07-15 16:04:49.083759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.096 [2024-07-15 16:04:49.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.096 [2024-07-15 16:04:49.084016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the same print_command/print_completion pair repeats for every I/O that was outstanding on the removed path's submission queue when it was deleted: READs for lba 79928-80040 and WRITEs for lba 80288-80728, each completed ABORTED - SQ DELETION (00/08)]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.097 [2024-07-15 16:04:49.086236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.097 [2024-07-15 16:04:49.086251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 
[2024-07-15 16:04:49.086367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.098 [2024-07-15 16:04:49.086852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.086905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.086941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.086977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.086994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80152 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.098 [2024-07-15 16:04:49.087549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.098 [2024-07-15 16:04:49.087563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:37.099 [2024-07-15 16:04:49.087621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.099 [2024-07-15 16:04:49.087769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.099 [2024-07-15 16:04:49.087814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.099 [2024-07-15 16:04:49.087826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:23:37.099 [2024-07-15 16:04:49.087838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.099 [2024-07-15 16:04:49.087922] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8e5390 was disconnected and freed. reset controller. 
00:23:37.099 [2024-07-15 16:04:49.087943] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:37.099 [2024-07-15 16:04:49.087978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.099 [2024-07-15 16:04:49.087997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.099 [2024-07-15 16:04:49.088012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.099 [2024-07-15 16:04:49.088026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.099 [2024-07-15 16:04:49.088040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.099 [2024-07-15 16:04:49.088056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.099 [2024-07-15 16:04:49.088071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.099 [2024-07-15 16:04:49.088084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.099 [2024-07-15 16:04:49.088097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:37.099 [2024-07-15 16:04:49.088158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf0f0 (9): Bad file descriptor
00:23:37.099 [2024-07-15 16:04:49.091414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:37.099 [2024-07-15 16:04:49.207253] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:37.099 [2024-07-15 16:04:52.701233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.099 [2024-07-15 16:04:52.701296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.099-00:23:37.101 [2024-07-15 16:04:52.701342-704010] nvme_qpair.c: [similar entry pairs condensed: *NOTICE*: READ (lba 101528-101832) and WRITE (lba 101872-102264) sqid:1 nsid:1 len:8 commands, each followed by *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:37.101 [2024-07-15 16:04:52.704029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:37.101 [2024-07-15 16:04:52.704044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.101 [2024-07-15 16:04:52.704235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.101 [2024-07-15 16:04:52.704249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.102 [2024-07-15 16:04:52.704263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.102 [2024-07-15 16:04:52.704277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.102 [2024-07-15 16:04:52.704291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.102 [2024-07-15 16:04:52.704306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.102 [2024-07-15 16:04:52.704319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.102 [2024-07-15 16:04:52.704334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.102 [2024-07-15 16:04:52.704347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:37.102 [2024-07-15 16:04:52.704363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:37.102 [2024-07-15 16:04:52.704377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... two more identical pairs for WRITE lba:102368 (cid:112) and lba:102376 (cid:117) ...]
00:23:37.102 [2024-07-15 16:04:52.704470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:37.102 [2024-07-15 16:04:52.704487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102384 len:8 PRP1 0x0 PRP2 0x0
00:23:37.102 [2024-07-15 16:04:52.704500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.102 [2024-07-15 16:04:52.704519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort-queued/manual-complete sequence repeats for queued WRITE lba:102392 through lba:102536 and queued READ lba:101840 through lba:101864, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:37.103 [2024-07-15 16:04:52.705688] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa89d80 was disconnected and freed. reset controller.
00:23:37.103 [2024-07-15 16:04:52.705707] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:37.103 [2024-07-15 16:04:52.705757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.103 [2024-07-15 16:04:52.705777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.103 [2024-07-15 16:04:52.705793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.103 [2024-07-15 16:04:52.705813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.103 [2024-07-15 16:04:52.705828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.103 [2024-07-15 16:04:52.705842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.103 [2024-07-15 16:04:52.705857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.103 [2024-07-15 16:04:52.705870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.103 [2024-07-15 16:04:52.705890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:37.103 [2024-07-15 16:04:52.705940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf0f0 (9): Bad file descriptor
00:23:37.103 [2024-07-15 16:04:52.709208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:37.103 [2024-07-15 16:04:52.898970] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
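The burst above is the visible signature of the path failover under test: the TCP connection to 10.0.0.2:4421 drops, every command still outstanding or queued on qpair 0xa89d80 is completed as ABORTED - SQ DELETION, and bdev_nvme fails over to 10.0.0.2:4422 and resets the controller. When triaging a run like this, a small scraper can condense the thousands of notice lines into the few facts that matter. Below is a minimal sketch, not part of the test suite; the script name and structure are hypothetical, and the regexes assume only the exact *NOTICE* formats visible in this console output.

#!/usr/bin/env python3
# scrape_aborts.py -- hypothetical helper, not part of the SPDK test suite.
# Condenses the qpair-abort bursts in an nvmf-tcp autotest console log
# (read from stdin) into per-opcode counts, LBA ranges, and failover events.
import re
import sys
from collections import defaultdict

# Command print from nvme_qpair.c:nvme_io_qpair_print_command
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
# Failover notice from bdev_nvme.c:bdev_nvme_failover_trid
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(lines):
    lbas = defaultdict(list)  # opcode -> LBAs printed on the abort path
    failovers = []            # (old trid, new trid) pairs
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            lbas[m.group(1)].append(int(m.group(2)))
        m = FAILOVER_RE.search(line)
        if m:
            failovers.append(m.groups())
    return lbas, failovers

if __name__ == "__main__":
    # Usage: python3 scrape_aborts.py < console.log
    lbas, failovers = summarize(sys.stdin)
    for opcode, seen in sorted(lbas.items()):
        print(f"{opcode}: {len(seen)} commands aborted, lba {min(seen)}..{max(seen)}")
    for old, new in failovers:
        print(f"failover: {old} -> {new}")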
00:23:37.103 [2024-07-15 16:04:57.229807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:37.103 [2024-07-15 16:04:57.229848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 32 more identical WRITE command/completion pairs, lba:56920 through lba:57168 in steps of 8 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:37.104 [2024-07-15 16:04:57.230849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.104 [2024-07-15 16:04:57.230862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the dump continues with READ command/completion pairs for lba:56224 through lba:56848 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and interleaved WRITE pairs for lba:57176 through lba:57232 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:37.106 [2024-07-15 16:04:57.233594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56856 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.106 [2024-07-15 16:04:57.233753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa89b70 is same with the state(5) to be set 00:23:37.106 [2024-07-15 16:04:57.233786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.106 [2024-07-15 16:04:57.233797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.106 [2024-07-15 16:04:57.233809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56904 len:8 PRP1 0x0 PRP2 0x0 00:23:37.106 [2024-07-15 16:04:57.233821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233904] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa89b70 was disconnected and freed. reset controller. 
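The long run of ABORTED - SQ DELETION (00/08) notices above is the expected signature of a path being torn down mid-I/O: nvme_qpair_abort_queued_reqs drains the queue and each still-queued command is completed manually with that status before the qpair is freed. When sizing such a storm in a saved console log, a throwaway one-liner is easier than scrolling (the log filename here is hypothetical):

grep -c 'ABORTED - SQ DELETION (00/08)' console.log   # how many queued commands the teardown aborted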
00:23:37.106 [2024-07-15 16:04:57.233929] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:37.106 [2024-07-15 16:04:57.233964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.106 [2024-07-15 16:04:57.233983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.233999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.106 [2024-07-15 16:04:57.234012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.234026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.106 [2024-07-15 16:04:57.234040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.234053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.106 [2024-07-15 16:04:57.234066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.106 [2024-07-15 16:04:57.234079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.106 [2024-07-15 16:04:57.237374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.106 [2024-07-15 16:04:57.237426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf0f0 (9): Bad file descriptor 00:23:37.106 [2024-07-15 16:04:57.272987] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
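"Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" followed by "Resetting controller successful" is the host-side hand-off: the test detaches the path currently carrying I/O over rpc.py, and bdev_nvme fails over to the next registered trid and resets the controller. Condensed from the rpc.py calls visible later in this trace (workspace path abbreviated to $SPDK_DIR for readability; socket, names, and ports exactly as logged), one such hand-off amounts to:

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# drop the active path; bdev_nvme falls back to the next registered trid and resets
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3   # allow the reset/reconnect to complete, as the script does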
00:23:37.106 
00:23:37.106 Latency(us)
00:23:37.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:37.106 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:37.107 Verification LBA range: start 0x0 length 0x4000
00:23:37.107 NVMe0n1 : 15.00 8448.23 33.00 891.25 0.00 13677.85 794.93 15437.37
00:23:37.107 ===================================================================================================================
00:23:37.107 Total : 8448.23 33.00 891.25 0.00 13677.85 794.93 15437.37
00:23:37.107 Received shutdown signal, test time was about 15.000000 seconds
00:23:37.107 
00:23:37.107 Latency(us)
00:23:37.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:37.107 ===================================================================================================================
00:23:37.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1222587
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1222587 /var/tmp/bdevperf.sock
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1222587 ']'
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:37.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
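The pass criterion applied at host/failover.sh@65-67 above is deliberately blunt: count the 'Resetting controller successful' notices in the bdevperf log the test keeps (try.txt in this run) and require exactly one per planned failover. A minimal sketch of that check:

# three failovers were triggered, so exactly three successful resets must appear
count=$(grep -c 'Resetting controller successful' try.txt)
(( count != 3 )) && exit 1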
00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.107 [2024-07-15 16:05:03.865070] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.107 16:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:37.364 [2024-07-15 16:05:04.129774] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:37.364 16:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.929 NVMe0n1 00:23:37.929 16:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.187 00:23:38.187 16:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.445 00:23:38.445 16:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.703 16:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:38.703 16:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.962 16:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:42.255 16:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.255 16:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:42.255 16:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1223386 00:23:42.255 16:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.255 16:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1223386 00:23:43.661 0 00:23:43.661 16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.661 [2024-07-15 16:05:03.303897] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:23:43.661 [2024-07-15 16:05:03.303984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222587 ] 00:23:43.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.661 [2024-07-15 16:05:03.368002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.661 [2024-07-15 16:05:03.474185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.661 [2024-07-15 16:05:05.855405] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:43.661 [2024-07-15 16:05:05.855500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.661 [2024-07-15 16:05:05.855522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.661 [2024-07-15 16:05:05.855538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.661 [2024-07-15 16:05:05.855567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.661 [2024-07-15 16:05:05.855581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.661 [2024-07-15 16:05:05.855594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.661 [2024-07-15 16:05:05.855608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.661 [2024-07-15 16:05:05.855622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.661 [2024-07-15 16:05:05.855635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.661 [2024-07-15 16:05:05.855681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.661 [2024-07-15 16:05:05.855713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d0f0 (9): Bad file descriptor 00:23:43.661 [2024-07-15 16:05:05.903535] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:43.661 Running I/O for 1 seconds... 
00:23:43.661 
00:23:43.661 Latency(us)
00:23:43.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.661 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:43.661 Verification LBA range: start 0x0 length 0x4000
00:23:43.661 NVMe0n1 : 1.01 8199.52 32.03 0.00 0.00 15547.32 1492.76 13495.56
00:23:43.661 ===================================================================================================================
00:23:43.661 Total : 8199.52 32.03 0.00 0.00 15547.32 1492.76 13495.56
00:23:43.662 16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:43.919 16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:44.177 16:05:11 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:44.434 16:05:11 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:47.717 16:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
16:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1222587
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1222587 ']'
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1222587
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222587
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222587'
killing process with pid 1222587
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1222587
16:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1222587
00:23:47.975 16:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
16:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
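Worth noting about the one-second run just summarized: the second bdevperf instance was started earlier (host/failover.sh@72) with -z, which keeps it idle and listening on its own RPC socket until perform_tests is issued, which is what lets the script attach NVMe0 before the verify workload begins. The two halves of that arrangement, condensed from the trace (workspace path abbreviated to $SPDK_DIR):

# start bdevperf idle; -z defers the workload until an RPC arrives
$SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# ...attach the controller via rpc.py, then kick off the timed run
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests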
16:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.541 rmmod nvme_tcp 00:23:48.541 rmmod nvme_fabrics 00:23:48.541 rmmod nvme_keyring 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1220364 ']' 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1220364 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1220364 ']' 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1220364 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220364 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220364' 00:23:48.541 killing process with pid 1220364 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1220364 00:23:48.541 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1220364 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.798 16:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.706 16:05:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.706 00:23:50.706 real 0m35.588s 00:23:50.706 user 2m5.822s 00:23:50.706 sys 0m5.682s 00:23:50.706 16:05:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.706 16:05:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
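Teardown mirrors setup: the target process is killed, the kernel NVMe-oF modules are unloaded (the rmmod lines above are modprobe's verbose output), and the test addresses are flushed. A condensed sketch of nvmftestfini's effect, using only commands visible in this trace (_remove_spdk_ns's internals are elided in the log and assumed to delete the cvl_0_0_ns_spdk namespace):

# unload initiator-side kernel modules pulled in for the test
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# tear down the target-side network namespace, then drop the initiator IP
_remove_spdk_ns
ip -4 addr flush cvl_0_1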
00:23:50.706 ************************************ 00:23:50.706 END TEST nvmf_failover 00:23:50.706 ************************************ 00:23:50.706 16:05:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.706 16:05:17 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:50.706 16:05:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.706 16:05:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.706 16:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.707 ************************************ 00:23:50.707 START TEST nvmf_host_discovery 00:23:50.707 ************************************ 00:23:50.707 16:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:50.966 * Looking for test storage... 00:23:50.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:50.966 16:05:17 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.966 16:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.870 16:05:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:52.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:52.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.870 16:05:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.870 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:52.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:52.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.871 16:05:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:23:52.871 00:23:52.871 --- 10.0.0.2 ping statistics --- 00:23:52.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.871 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:52.871 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:53.130 00:23:53.130 --- 10.0.0.1 ping statistics --- 00:23:53.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.130 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1225994 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1225994 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1225994 ']' 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.130 16:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.130 [2024-07-15 16:05:19.883277] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:53.130 [2024-07-15 16:05:19.883359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.130 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.130 [2024-07-15 16:05:19.947352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.130 [2024-07-15 16:05:20.057779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.130 [2024-07-15 16:05:20.057835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.130 [2024-07-15 16:05:20.057848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.130 [2024-07-15 16:05:20.057859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.130 [2024-07-15 16:05:20.057868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
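As the EAL/app notices above show, the discovery-test target comes up inside a dedicated network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk across a real link even on a single host. The setup traced through nvmf/common.sh condenses to the following sketch (interface, namespace, and port values exactly as logged; workspace path abbreviated to $SPDK_DIR):

# isolate the target-side port in its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the target itself then runs inside the namespace
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2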
00:23:53.130 [2024-07-15 16:05:20.057904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 [2024-07-15 16:05:20.207131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 [2024-07-15 16:05:20.215316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 null0 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 null1 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1226024 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1226024 /tmp/host.sock 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1226024 ']' 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:53.388 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.388 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.388 [2024-07-15 16:05:20.290440] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:53.388 [2024-07-15 16:05:20.290509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226024 ] 00:23:53.647 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.647 [2024-07-15 16:05:20.354787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.647 [2024-07-15 16:05:20.471122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.905 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
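For anyone replaying this run by hand, the xtrace up to this point reduces to the following RPC sequence (a minimal sketch: it assumes scripts/rpc.py from the SPDK checkout, which is what the harness's rpc_cmd wrapper ultimately invokes, and it reuses the addresses and sockets of this run):

    # Target side (default RPC socket): TCP transport, discovery listener, backing null bdevs
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    # Host side (second app, started with -r /tmp/host.sock): run the discovery service
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # Target side again: a subsystem whose namespaces discovery should eventually surface
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0

Nothing attaches on the host yet: the subsystem does not admit the host NQN until the nvmf_subsystem_add_host call further down, so the bdev_nvme_get_controllers and bdev_get_bdevs probes above correctly come back empty, which is what the '' == '' checks assert.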
00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.906 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.165 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 [2024-07-15 16:05:20.885120] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:54.166 16:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:54.736 [2024-07-15 16:05:21.625138] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:54.736 [2024-07-15 16:05:21.625204] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:54.736 [2024-07-15 16:05:21.625229] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.996 [2024-07-15 16:05:21.711495] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:54.996 [2024-07-15 16:05:21.897923] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:54.996 [2024-07-15 16:05:21.897948] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.255 16:05:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:55.255 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:55.512 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 [2024-07-15 16:05:22.345346] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.513 [2024-07-15 16:05:22.345774] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:55.513 [2024-07-15 16:05:22.345809] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:55.513 16:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:55.772 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.772 [2024-07-15 16:05:22.473686] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:55.772 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:55.772 16:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:56.032 [2024-07-15 16:05:22.785198] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:56.032 [2024-07-15 16:05:22.785246] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:56.032 [2024-07-15 16:05:22.785256] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.600 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 [2024-07-15 16:05:23.569639] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:56.859 [2024-07-15 16:05:23.569685] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.859 [2024-07-15 16:05:23.579287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.859 [2024-07-15 16:05:23.579334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.859 [2024-07-15 16:05:23.579355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.859 [2024-07-15 16:05:23.579370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.859 [2024-07-15 16:05:23.579386] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.859 [2024-07-15 16:05:23.579415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.859 [2024-07-15 16:05:23.579430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.859 [2024-07-15 16:05:23.579444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.859 [2024-07-15 16:05:23.579457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set
00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:56.859 [2024-07-15 16:05:23.589284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor
00:23:56.859 [2024-07-15 16:05:23.599332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:56.859 [2024-07-15 16:05:23.599611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.859 [2024-07-15 16:05:23.599642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420
00:23:56.859 [2024-07-15 16:05:23.599659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set
00:23:56.859 [2024-07-15 16:05:23.599682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor
00:23:56.859 [2024-07-15 16:05:23.599731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:56.859 [2024-07-15 16:05:23.599751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:56.859 [2024-07-15 16:05:23.599768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:56.859 [2024-07-15 16:05:23.599789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
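The connect() failures with errno = 111 above are expected at this point: the 4420 listener was just removed, so the host's automatic controller resets fail until the next discovery log page drops the stale path (the "4420 not found" line appears a little further down). The harness rides this out with its usual poll loop; the helper it calls, reconstructed here from the xtrace as a sketch (the authoritative version lives in test/host/discovery.sh), is roughly:

    # Print the sorted trsvcids of every path of one controller on the host app
    get_subsystem_paths() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

waitforcondition retries the comparison up to ten times, one second apart, until get_subsystem_paths nvme0 collapses from "4420 4421" to plain "4421".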
00:23:56.859 [2024-07-15 16:05:23.609418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:56.859 [2024-07-15 16:05:23.609654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.859 [2024-07-15 16:05:23.609685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420 00:23:56.859 [2024-07-15 16:05:23.609703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set 00:23:56.859 [2024-07-15 16:05:23.609727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor 00:23:56.859 [2024-07-15 16:05:23.609763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:56.859 [2024-07-15 16:05:23.609784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:56.859 [2024-07-15 16:05:23.609799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:56.859 [2024-07-15 16:05:23.609820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:56.859 [2024-07-15 16:05:23.619512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.859 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.859 [2024-07-15 16:05:23.620576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.859 [2024-07-15 16:05:23.620606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420 00:23:56.859 [2024-07-15 16:05:23.620622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set 00:23:56.859 [2024-07-15 16:05:23.620649] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor
00:23:56.859 [2024-07-15 16:05:23.620682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:56.859 [2024-07-15 16:05:23.620714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:56.859 [2024-07-15 16:05:23.620727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:56.859 [2024-07-15 16:05:23.620746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:56.859 [2024-07-15 16:05:23.629590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:56.859 [2024-07-15 16:05:23.629826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.859 [2024-07-15 16:05:23.629859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420
00:23:56.859 [2024-07-15 16:05:23.629885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set
00:23:56.859 [2024-07-15 16:05:23.629912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor
00:23:56.859 [2024-07-15 16:05:23.629951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:56.859 [2024-07-15 16:05:23.629966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:56.859 [2024-07-15 16:05:23.629979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:56.859 [2024-07-15 16:05:23.629998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:56.859 [2024-07-15 16:05:23.639672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:56.859 [2024-07-15 16:05:23.639893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.859 [2024-07-15 16:05:23.639938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420
00:23:56.859 [2024-07-15 16:05:23.639960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set
00:23:56.859 [2024-07-15 16:05:23.639983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor
00:23:56.859 [2024-07-15 16:05:23.640004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:56.859 [2024-07-15 16:05:23.640018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:56.859 [2024-07-15 16:05:23.640032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:56.860 [2024-07-15 16:05:23.640066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
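Interleaved with the reconnect noise, the harness keeps auditing the host app's notification stream. The pattern repeated throughout the xtrace (notify_get_notifications from the last seen id, a jq length count, then the cursor advanced by that count) can be sketched as follows; the variable names follow the trace, and the real helper is in test/host/discovery.sh:

    notify_id=0
    # Count notifications newer than notify_id, then advance the cursor past them
    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

This reproduces the notification_count/notify_id progression seen in this log (0/0, 1/1, 1/2, 0/2, and finally 2/4): each bdev register or unregister on the host counts once, so dropping the 4420 path changes no bdevs, while the bdev_nvme_stop_discovery below unregisters both namespaces at once.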
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.860 [2024-07-15 16:05:23.649748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:56.860 [2024-07-15 16:05:23.649988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.860 [2024-07-15 16:05:23.650016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1602c00 with addr=10.0.0.2, port=4420 00:23:56.860 [2024-07-15 16:05:23.650032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602c00 is same with the state(5) to be set 00:23:56.860 [2024-07-15 16:05:23.650053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1602c00 (9): Bad file descriptor 00:23:56.860 [2024-07-15 16:05:23.650073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:56.860 [2024-07-15 16:05:23.650087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:56.860 [2024-07-15 16:05:23.650100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:56.860 [2024-07-15 16:05:23.650118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:56.860 [2024-07-15 16:05:23.656266] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:56.860 [2024-07-15 16:05:23.656297] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:56.860 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:57.121 16:05:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.060 [2024-07-15 16:05:24.893682] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:58.060 [2024-07-15 16:05:24.893710] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:58.060 [2024-07-15 16:05:24.893737] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:58.060 [2024-07-15 16:05:24.981107] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:23:58.319 [2024-07-15 16:05:25.088446] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:58.319 [2024-07-15 16:05:25.088491] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.319 request:
00:23:58.319 {
00:23:58.319 "name": "nvme",
00:23:58.319 "trtype": "tcp",
00:23:58.319 "traddr": "10.0.0.2",
00:23:58.319 "adrfam": "ipv4",
00:23:58.319 "trsvcid": "8009",
00:23:58.319 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:58.319 "wait_for_attach": true,
00:23:58.319 "method": "bdev_nvme_start_discovery",
00:23:58.319 "req_id": 1
00:23:58.319 }
00:23:58.319 Got JSON-RPC error response
00:23:58.319 response:
00:23:58.319 {
00:23:58.319 "code": -17,
00:23:58.319 "message": "File exists"
00:23:58.319 }
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:23:58.319 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.320 request:
00:23:58.320 {
00:23:58.320 "name": "nvme_second",
00:23:58.320 "trtype": "tcp",
00:23:58.320 "traddr": "10.0.0.2",
00:23:58.320 "adrfam": "ipv4",
00:23:58.320 "trsvcid": "8009",
00:23:58.320 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:58.320 "wait_for_attach": true,
00:23:58.320 "method": "bdev_nvme_start_discovery",
00:23:58.320 "req_id": 1
00:23:58.320 }
00:23:58.320 Got JSON-RPC error response
00:23:58.320 response:
00:23:58.320 {
00:23:58.320 "code": -17,
00:23:58.320 "message": "File exists"
00:23:58.320 }
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:58.320 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:58.580 16:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:59.521 [2024-07-15 16:05:26.296635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.521 [2024-07-15 16:05:26.296684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161dc90 with addr=10.0.0.2, port=8010
00:23:59.521 [2024-07-15 16:05:26.296710] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:59.521 [2024-07-15 16:05:26.296725] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:59.521 [2024-07-15 16:05:26.296737] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:00.463 [2024-07-15 16:05:27.299480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.463 [2024-07-15 16:05:27.299561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161dc90 with addr=10.0.0.2, port=8010
00:24:00.463 [2024-07-15 16:05:27.299598] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:00.463 [2024-07-15 16:05:27.299624] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:00.463 [2024-07-15 16:05:27.299639] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:01.401 [2024-07-15 16:05:28.301288] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:24:01.401 request:
00:24:01.401 {
00:24:01.401 "name": "nvme_second",
00:24:01.401 "trtype": "tcp",
00:24:01.401 "traddr": "10.0.0.2",
00:24:01.401 "adrfam": "ipv4",
00:24:01.401 "trsvcid": "8010",
00:24:01.401 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:01.401 "wait_for_attach": false,
00:24:01.401 "attach_timeout_ms": 3000,
00:24:01.401 "method": "bdev_nvme_start_discovery",
00:24:01.401 "req_id": 1
00:24:01.401 }
00:24:01.401 Got JSON-RPC error response
00:24:01.401 response:
00:24:01.401 {
00:24:01.401 "code": -110,
00:24:01.401 "message": "Connection timed out"
00:24:01.401 }
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:01.401 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1226024
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:01.662 rmmod nvme_tcp
00:24:01.662 rmmod nvme_fabrics
00:24:01.662 rmmod nvme_keyring
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1225994 ']'
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1225994
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1225994 ']'
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1225994
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225994
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225994'
00:24:01.662 killing process with pid 1225994
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1225994
00:24:01.662 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1225994
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:01.922 16:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:03.831 16:05:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:03.831
00:24:03.831 real 0m13.136s
00:24:03.831 user 0m19.051s
00:24:03.831 sys 0m2.728s
00:24:03.831 16:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:03.831 16:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:03.831 ************************************
00:24:03.831 END TEST nvmf_host_discovery
00:24:03.831 ************************************
00:24:04.090 16:05:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:04.090 16:05:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:04.090 16:05:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:04.090 16:05:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:04.090 16:05:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:04.090 ************************************
00:24:04.090 START TEST nvmf_host_multipath_status
00:24:04.090 ************************************
00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:04.090 * Looking for test storage...
00:24:04.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.090 16:05:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.090 16:05:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:05.997 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:05.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:05.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
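The loop traced next turns each matched PCI function into a kernel netdev name by globbing sysfs. Condensed from the @382 through @401 entries that follow, with the pci_devs discovery itself elided:

    for pci in "${pci_devs[@]}"; do
        # every network port exposes its netdev under its PCI device node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done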
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:05.998 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:05.998 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:05.998 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:06.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:06.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms
00:24:06.260
00:24:06.260 --- 10.0.0.2 ping statistics ---
00:24:06.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.260 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:06.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:06.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:24:06.260
00:24:06.260 --- 10.0.0.1 ping statistics ---
00:24:06.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.260 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1229164
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1229164
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229164 ']'
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:06.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:06.260 16:05:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:06.260 [2024-07-15 16:05:33.043550] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
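Condensed from the nvmf_tcp_init trace above: the two ports found earlier (cvl_0_0 and cvl_0_1) are split across network namespaces, the target port inside cvl_0_0_ns_spdk and the initiator port in the root namespace, and both directions are pinged before the target is launched. The same steps, stripped of the xtrace framing:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

With connectivity confirmed, nvmf_tgt itself is started inside the namespace via the same ip netns exec prefix, which is why the app launch traced above carries it.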
00:24:06.260 [2024-07-15 16:05:33.043636] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.260 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.260 [2024-07-15 16:05:33.109748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:06.519 [2024-07-15 16:05:33.219683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.519 [2024-07-15 16:05:33.219741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.519 [2024-07-15 16:05:33.219753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.519 [2024-07-15 16:05:33.219764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.519 [2024-07-15 16:05:33.219773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.519 [2024-07-15 16:05:33.219860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.519 [2024-07-15 16:05:33.219864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1229164 00:24:06.519 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:06.777 [2024-07-15 16:05:33.642952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.777 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:07.035 Malloc0 00:24:07.035 16:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:07.294 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:07.553 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.811 [2024-07-15 16:05:34.678567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.811 16:05:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.069 [2024-07-15 16:05:34.923182] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1229330 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1229330 /var/tmp/bdevperf.sock 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229330 ']' 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.069 16:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.635 16:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.635 16:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:08.635 16:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:08.635 16:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:09.203 Nvme0n1 00:24:09.203 16:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:09.771 Nvme0n1 00:24:09.771 16:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:09.771 16:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:11.677 16:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:11.677 16:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:11.935 16:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.502 16:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:13.436 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:13.436 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.436 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.436 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.694 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.694 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.694 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.694 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.952 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.210 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.210 16:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.469 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.727 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.727 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:14.727 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.985 16:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.243 16:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.658 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.921 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.921 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.921 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.921 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.178 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.178 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.178 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.178 16:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.434 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.434 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.434 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.434 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.692 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.692 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.692 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.692 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.005 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.005 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:18.005 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.263 16:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:18.521 16:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:19.456 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:19.456 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.456 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.456 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.713 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.713 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:19.713 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.713 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.971 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.971 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.971 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.971 16:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.228 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.228 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.228 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.228 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.485 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.485 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.485 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.485 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.743 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.743 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.743 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.743 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.000 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.000 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:21.000 16:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.258 16:05:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:21.518 16:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:22.452 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:22.452 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.452 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.452 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.709 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.709 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:22.709 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.709 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.967 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.967 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.967 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.967 16:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.225 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.225 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.225 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.225 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.482 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.482 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.482 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.482 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.739 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:23.740 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:23.740 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.740 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.998 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.998 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:23.998 16:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:24.255 16:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:24.514 16:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:25.452 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:25.452 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:25.452 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.452 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.710 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.710 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.710 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.710 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.966 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.966 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.966 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.966 16:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.222 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.222 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
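check_status itself is just six port_status assertions; its six arguments map, in order, onto the @68 through @73 lines of every cycle above. Reconstructed from the trace:

check_status() {
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

So the @106 call above, check_status true false true true true false, asserts that after set_ANA_state non_optimized inaccessible only the 4420 path carries I/O (current), while the 4421 path stays connected but is no longer accessible.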
port_status 4421 connected true 00:24:26.222 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.222 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.479 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.479 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:26.479 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.479 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.734 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.734 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:26.734 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.734 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.991 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.991 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:26.991 16:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:27.248 16:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:27.506 16:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:28.438 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:28.438 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:28.438 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.438 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.695 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.695 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:28.695 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.695 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.952 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.952 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.952 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.952 16:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.209 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.209 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.209 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.209 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.466 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.466 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:29.466 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.466 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.723 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.723 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:29.723 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.723 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.987 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.987 16:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:30.243 16:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:30.243 16:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:30.522 16:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.836 16:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:31.769 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:31.770 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:31.770 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.770 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.027 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.027 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.027 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.027 16:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.284 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.284 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.284 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.284 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.542 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.542 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.542 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.542 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.801 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.801 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.801 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.801 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.059 16:05:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.059 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.059 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.059 16:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.317 16:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.317 16:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:33.317 16:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.575 16:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:33.833 16:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:34.767 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:34.767 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:34.767 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.767 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:35.025 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.025 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:35.025 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.025 16:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.283 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.283 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.283 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.283 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.541 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.541 16:06:02 
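The policy switch at @116 is the pivot of the second half of the test: after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, every path in the best reachable ANA state counts as current, so the @121 check expects current == true on both 4420 and 4421 at once, something the earlier cycles under the default active/passive policy never allowed. An illustrative way to watch that flip, reusing the assumed variables from the sketches above:

"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
# Print one line per path: the listener port plus its current flag.
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current)"'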
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.541 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.541 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.798 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.799 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.799 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.799 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:36.056 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.056 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:36.056 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.056 16:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.314 16:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.314 16:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:36.314 16:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.570 16:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:36.829 16:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:38.204 16:06:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.204 16:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.473 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.473 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.473 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.473 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.731 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.731 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.731 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.731 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.989 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.989 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.989 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.989 16:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.248 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.248 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.248 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.248 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:39.506 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.506 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:39.506 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:39.764 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.022 16:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:40.958 16:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:40.958 16:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.958 16:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.958 16:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.216 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.216 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.216 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.216 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.474 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.474 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.474 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.474 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.733 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.733 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.733 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.733 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.991 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.991 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.991 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.991 16:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.248 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.248 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- 
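The @135 check that completes below is the last status assertion; the run then tears down bdevperf via killprocess 1229330. That helper lives in test/common/autotest_common.sh and is visible here only through its @948 to @972 trace on the following lines; a rough reconstruction, with the branch structure partly guessed from those line numbers:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1             # @948: refuse an empty pid
    kill -0 "$pid"                        # @952: only proceed if it is alive
    if [[ $(uname) == Linux ]]; then      # @953
        process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_2 here
        # @958: a process_name of "sudo" would take a different kill path;
        # that branch is not taken in this run.
        [[ $process_name == sudo ]] || true
    fi
    echo "killing process with pid $pid"  # @966
    kill "$pid"                           # @967
    wait "$pid"                           # @972: reap it and surface its rc
}

The "Connection closed with partial response" lines right after the kill mark bdevperf's RPC socket going away mid-stream; @139 wait and @141 cat then replay the saved bdevperf output from try.txt.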
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.248 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.248 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1229330 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229330 ']' 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229330 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229330 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229330' 00:24:42.507 killing process with pid 1229330 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229330 00:24:42.507 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229330 00:24:42.779 Connection closed with partial response: 00:24:42.779 00:24:42.779 00:24:42.779 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1229330 00:24:42.779 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.779 [2024-07-15 16:05:34.985179] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:24:42.779 [2024-07-15 16:05:34.985276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229330 ] 00:24:42.779 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.779 [2024-07-15 16:05:35.046293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.779 [2024-07-15 16:05:35.161366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.779 Running I/O for 90 seconds... 
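Everything from here on is the replayed bdevperf log from try.txt. Each NOTICE pair is one I/O: the command print (READ/WRITE, sqid, cid, LBA) followed by its completion, and these completions all carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), that is, status code type 0x3 (path related) with status code 0x02, which is exactly what a listener parked in the inaccessible ANA state must return; the host multipath layer is expected to retry such I/O on the surviving path rather than fail it. An illustrative one-liner (not part of the test) to gauge how much I/O hit the transition window:

# Count completions in the captured bdevperf log that landed with the
# path-related ANA-inaccessible status.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt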
00:24:42.779 [2024-07-15 16:05:51.062797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.779 [2024-07-15 16:05:51.062860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.779 [2024-07-15 16:05:51.063122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.779 [2024-07-15 16:05:51.063145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.779 [2024-07-15 16:05:51.063186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.779 [2024-07-15 16:05:51.063204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.063966] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.063987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 
16:05:51.064378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.064975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.064992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28616 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.780 [2024-07-15 16:05:51.065301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.780 [2024-07-15 16:05:51.065323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.781 [2024-07-15 16:05:51.065354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.781 [2024-07-15 16:05:51.065376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.781 [2024-07-15 16:05:51.065392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.781 [2024-07-15 16:05:51.065431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.781 [2024-07-15 16:05:51.065447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.781 [2024-07-15 16:05:51.065469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.781 [2024-07-15 16:05:51.065490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.781 [2024-07-15 16:05:51.065512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.781 [2024-07-15 16:05:51.065529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.781 [2024-07-15 16:05:51.065551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.781 [2024-07-15 16:05:51.065567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0
[... roughly 80 further command/completion NOTICE pairs between 16:05:51.065589 and 16:05:51.069012: WRITE lba:28704-28824 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ lba:27816-28304 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all nsid:1 len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 005e through 002b (wrapping at 007f) ...]
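For anyone triaging this stream: spdk_nvme_print_completion renders the NVMe status as "(SCT/SC)", so "(03/02)" decodes to Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible). The controller is reporting that the namespace's ANA state makes it unreachable on this path, and every command still queued on qid:1 is failed back with that status. A minimal decode sketch in C, assuming the enum and field names from SPDK's include/spdk/nvme_spec.h (an assumption; exact identifiers can vary across SPDK versions):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme_spec.h"

    /* True when a completion carries the path status this log prints as
     * "ASYMMETRIC ACCESS INACCESSIBLE (03/02)". */
    static bool cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_PATH &&
               cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
    }

    /* Print a completion in the same "(sct/sc) cid ... sqhd ... dnr" shape
     * as the log lines above. */
    static void print_status(const struct spdk_nvme_cpl *cpl)
    {
        printf("(%02x/%02x) cid:%u cdw0:%u sqhd:%04x p:%u m:%u dnr:%u\n",
               (unsigned)cpl->status.sct, (unsigned)cpl->status.sc,
               (unsigned)cpl->cid, (unsigned)cpl->cdw0, (unsigned)cpl->sqhd,
               (unsigned)cpl->status.p, (unsigned)cpl->status.m,
               (unsigned)cpl->status.dnr);
    }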
00:24:42.783 [2024-07-15 16:05:51.069034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.783 [2024-07-15 16:05:51.069051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0
[... roughly 50 further command/completion NOTICE pairs between 16:05:51.069072 and 16:05:51.071761: WRITE lba:28312-28688 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), all nsid:1 len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 002d through 005c ...]
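Note that every completion in this stream carries dnr:0: the NVMe Do Not Retry bit is clear, which tells the host these commands may be resubmitted once another path to the namespace becomes accessible. Failing queued I/O this way is the behavior expected of an nvmf target whose path has been put into the ANA-inaccessible state, which is presumably what this autotest is exercising. A sketch of how a host application on SPDK's NVMe driver might act on that in its completion callback; struct my_io, resubmit_on_other_path(), complete_ok() and fail_io() are hypothetical placeholders, while the callback shape matches SPDK's spdk_nvme_cmd_cb:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    struct my_io;                                   /* hypothetical per-I/O context */
    void resubmit_on_other_path(struct my_io *io);  /* hypothetical */
    void complete_ok(struct my_io *io);             /* hypothetical */
    void fail_io(struct my_io *io);                 /* hypothetical */

    /* Signature matches spdk_nvme_cmd_cb, the completion callback type
     * passed to spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(). */
    static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        struct my_io *io = ctx;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            complete_ok(io);
        } else if (cpl->status.sct == SPDK_NVME_SCT_PATH && !cpl->status.dnr) {
            /* Path-related failure such as ANA INACCESSIBLE with the Do Not
             * Retry bit clear (dnr:0, as in every completion in this log):
             * the command is a candidate for resubmission on another
             * path/qpair once one is accessible. */
            resubmit_on_other_path(io);
        } else {
            fail_io(io);
        }
    }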
00:24:42.784 [2024-07-15 16:05:51.071777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.784 [2024-07-15 16:05:51.071792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0
[... roughly 60 further command/completion NOTICE pairs between 16:05:51.071813 and 16:05:51.074744: WRITE lba:28704-28808 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with a second pass of READ lba:27816-28160 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on fresh cids, all nsid:1 len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 005e through 0017 (wrapping at 007f) ...]
00:24:42.786 [2024-07-15 16:05:51.074765] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.074781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.074803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.074819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.074841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.074872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.074905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.074933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.074955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.074971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.074993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 
16:05:51.075209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.786 [2024-07-15 16:05:51.075613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.075748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.075765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.786 [2024-07-15 16:05:51.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.786 [2024-07-15 16:05:51.076903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.076938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.076955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.076978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.076994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 
16:05:51.077152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28536 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.077994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.787 [2024-07-15 16:05:51.078463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.787 [2024-07-15 16:05:51.078484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.078499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.078521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.078537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:24:42.788 [2024-07-15 16:05:51.079317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.788 [2024-07-15 16:05:51.079933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.079971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.079993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.080009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.080037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.080054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.788 [2024-07-15 16:05:51.080077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.788 [2024-07-15 16:05:51.080093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.789 [2024-07-15 16:05:51.080137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:42.789 [2024-07-15 16:05:51.080524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.080976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.080992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.789 [2024-07-15 16:05:51.081456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.789 [2024-07-15 16:05:51.081472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.790 [2024-07-15 16:05:51.081676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.790 [2024-07-15 16:05:51.081692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0
00:24:42.790 [2024-07-15 16:05:51.081713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.790 [2024-07-15 16:05:51.081729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:42.790 [2024-07-15 16:05:51.082053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.790 [2024-07-15 16:05:51.082070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0
[... several hundred similar *NOTICE* command/completion pairs trimmed: READ and WRITE commands on sqid:1 (lba 27808-28824), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0022-007f wrapping to 0000, timestamps 16:05:51.081-16:05:51.100 ...]
00:24:42.795 [2024-07-15 16:05:51.100790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.795 [2024-07-15
16:05:51.100806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.100829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.100846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.100868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.100894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.100926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.100943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.100965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.100982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.795 [2024-07-15 16:05:51.101066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.795 [2024-07-15 16:05:51.101104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.795 [2024-07-15 16:05:51.101143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.795 [2024-07-15 16:05:51.101197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:42.795 [2024-07-15 16:05:51.101250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.795 [2024-07-15 16:05:51.101898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.795 [2024-07-15 16:05:51.101917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.101939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.101955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.101978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.101994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 
16:05:51.102058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.102968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.102989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.103005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.103046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.103085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.796 [2024-07-15 16:05:51.103174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.796 [2024-07-15 16:05:51.103213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.796 [2024-07-15 16:05:51.103249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 
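Every completion in this burst carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02). In NVMe terms that is Status Code Type 0x3 (Path Related Status) with Status Code 0x02: the namespace's ANA state on this path is Inaccessible, and dnr:0 means the controller does not forbid a retry, so a multipath host is expected to resubmit the I/O on another path. When triaging a flood of these notices it is quicker to reduce the log to counts per status. The following is a minimal, hypothetical triage sketch (the script name and log path are placeholders, and it is not part of the SPDK test suite) that tallies command opcodes and completion statuses for records in the exact format printed above:

#!/usr/bin/env python3
# tally_qpair_notices.py -- hypothetical helper, not part of SPDK.
# Counts nvme_qpair command/completion notices in a saved autotest log,
# assuming records keep the format shown in this console output.
import re
import sys
from collections import Counter

# Completion records, e.g.
#   "474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
#    INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0"
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)
# Command records, e.g.
#   "243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28536 len:8"
COMMAND = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: "
    r"(?P<op>READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+)"
)

def main(path: str) -> None:
    statuses, ops, lbas = Counter(), Counter(), []
    with open(path, errors="replace") as fh:
        for line in fh:
            # finditer copes with several records fused onto one line.
            for m in COMPLETION.finditer(line):
                statuses["%s (%s/%s)" % (m["status"], m["sct"], m["sc"])] += 1
            for m in COMMAND.finditer(line):
                ops[m["op"]] += 1
                lbas.append(int(m["lba"]))
    for status, n in statuses.most_common():
        print("%6d  %s" % (n, status))
    if lbas:
        print("ops:", dict(ops), " lba range: %d-%d" % (min(lbas), max(lbas)))

if __name__ == "__main__":
    main(sys.argv[1])

Run against a saved console log it would print one line per distinct status, which makes a uniform (03/02) burst like this one obvious at a glance.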
00:24:42.796 [2024-07-15 16:05:51.103249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.796 [2024-07-15 16:05:51.103265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... a second sweep of repeated WRITE and READ command/completion pairs on qid:1 omitted; every completion again reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:42.799 [2024-07-15 16:05:51.116376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.799 [2024-07-15 16:05:51.116392] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.116964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.116980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.117002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.117018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.117040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.117056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.117078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.799 [2024-07-15 16:05:51.117094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.799 [2024-07-15 16:05:51.117117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.799 [2024-07-15 16:05:51.117134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.800 [2024-07-15 16:05:51.117228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.117971] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.117988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:24:42.800 [2024-07-15 16:05:51.118441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.118958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.118976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.800 [2024-07-15 16:05:51.119426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.800 [2024-07-15 16:05:51.119443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.801 [2024-07-15 16:05:51.119781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.801 [2024-07-15 16:05:51.119808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
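The (03/02) pair printed on every completion above is the NVMe status SCT 3h (Path Related Status) / SC 02h (Asymmetric Access Inaccessible): while the test drives a path into the ANA inaccessible state, every READ and WRITE queued on that path is failed back with this status. When triaging a flood like this, a per-status tally is usually all that is needed; a minimal sketch, assuming a local copy of this console output in a hypothetical file console.log and the one-completion-per-line layout shown here:

  # count completions per (SCT/SC) status; the status name and code pair
  # appear between "*NOTICE*: " and " qid:" on each completion line
  grep 'spdk_nvme_print_completion' console.log \
    | sed -n 's/.*\*NOTICE\*: \(.*\) (\([0-9a-f][0-9a-f]\/[0-9a-f][0-9a-f]\)).*/\2 \1/p' \
    | sort | uniq -c | sort -rn
  # expected top bucket for this window: "03/02 ASYMMETRIC ACCESS INACCESSIBLE"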
00:24:42.801-00:24:42.802 [...] the same command/completion pattern continues: the tail of the 16:05:51 burst (WRITEs up to lba 28704), then a second burst stamped [2024-07-15 16:06:06] covering lba roughly 104864-105872, every completion still ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 [...]
00:24:42.802 Received shutdown signal, test time was about 32.698967 seconds
00:24:42.803
00:24:42.803 Latency(us)
00:24:42.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:42.803 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:42.803 Verification LBA range: start 0x0 length 0x4000
00:24:42.803 Nvme0n1 : 32.70 7575.89 29.59 0.00 0.00 16862.90 254.86 4076242.11
00:24:42.803 ===================================================================================================================
00:24:42.803 Total : 7575.89 29.59 0.00 0.00 16862.90 254.86 4076242.11
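As a sanity check on the summary above: with the 4096-byte IO size stated in the Job line, the IOPS column reproduces the MiB/s column exactly. Plain awk arithmetic, nothing SPDK-specific:

  # 7575.89 IOPS x 4096 bytes/IO / 1048576 bytes/MiB = 29.59 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 7575.89 * 4096 / 1048576 }'

The ~4.08e6 us max latency plausibly reflects IOs held while the active path sat in the inaccessible state, and the Fail/s and TO/s columns staying at 0.00 is what lets the verify workload pass despite the flood of path errors.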
00:24:42.803 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1229164 ']'
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1229164
00:24:43.061 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229164 ']'
00:24:43.062 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229164
00:24:43.062 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:43.062 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:43.062 16:06:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229164
00:24:43.319 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:43.319 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:43.319 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229164'
killing process with pid 1229164
00:24:43.319 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229164
00:24:43.319 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229164
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:45.499
00:24:45.499 real 0m41.540s
00:24:45.499 user 2m5.468s
00:24:45.499 sys 0m10.577s
00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:45.499 ************************************
00:24:45.499 END TEST nvmf_host_multipath_status
00:24:45.499 ************************************
00:24:45.499 16:06:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
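The killprocess trace above follows a guarded kill-and-reap pattern: check the PID is still alive with kill -0, look up its command name with ps so a recycled PID (or the sudo wrapper) is never signalled blindly, then kill and wait to reap it. A simplified bash reconstruction of just the steps visible in the trace (the real function lives in autotest_common.sh and does more than this sketch):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1    # refuse to signal the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap it if it is our child
  }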
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.580 16:06:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.499 00:24:45.499 real 0m41.540s 00:24:45.499 user 2m5.468s 00:24:45.499 sys 0m10.577s 00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.499 16:06:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.499 ************************************ 00:24:45.499 END TEST nvmf_host_multipath_status 00:24:45.499 ************************************ 00:24:45.499 16:06:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.499 16:06:12 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:45.499 16:06:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.499 16:06:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.499 16:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.499 ************************************ 00:24:45.499 START TEST nvmf_discovery_remove_ifc 00:24:45.499 ************************************ 00:24:45.499 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:45.758 * Looking for test storage... 00:24:45.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.758 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:45.759 16:06:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.759 16:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.682 16:06:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.682 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:47.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:24:47.683 00:24:47.683 --- 10.0.0.2 ping statistics --- 00:24:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.683 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:24:47.683 00:24:47.683 --- 10.0.0.1 ping statistics --- 00:24:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.683 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1236155 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1236155 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1236155 ']' 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.683 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.942 [2024-07-15 16:06:14.628533] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
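[Editor's note] The netns plumbing traced above (nvmf/common.sh@242 through @268) is what gives this test a target at 10.0.0.2 inside a private namespace and an initiator at 10.0.0.1 in the root namespace, over the two cvl_0_* ports of one physical E810 pair. Condensed into a standalone sketch, with commands, device names, and addresses taken from the trace (run as root; illustrative, not the common.sh implementation itself):

ip netns add cvl_0_0_ns_spdk              # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target, private ns

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the initiator side and sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings correspond exactly to the ping statistics blocks in the log above and below this note.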
00:24:47.942 [2024-07-15 16:06:14.628626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.942 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.942 [2024-07-15 16:06:14.691993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.942 [2024-07-15 16:06:14.800517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.942 [2024-07-15 16:06:14.800579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.942 [2024-07-15 16:06:14.800607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.942 [2024-07-15 16:06:14.800619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.942 [2024-07-15 16:06:14.800630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.942 [2024-07-15 16:06:14.800659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.200 16:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.200 [2024-07-15 16:06:14.952480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.200 [2024-07-15 16:06:14.960694] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:48.200 null0 00:24:48.200 [2024-07-15 16:06:14.992597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1236293 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1236293 /tmp/host.sock 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1236293 ']' 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:48.200 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.200 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.200 [2024-07-15 16:06:15.060238] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:24:48.200 [2024-07-15 16:06:15.060313] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236293 ] 00:24:48.200 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.200 [2024-07-15 16:06:15.118576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.459 [2024-07-15 16:06:15.227317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.459 16:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.837 [2024-07-15 16:06:16.420803] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:49.837 [2024-07-15 16:06:16.420848] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:49.837 [2024-07-15 16:06:16.420884] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:49.837 [2024-07-15 16:06:16.548328] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:49.837 [2024-07-15 16:06:16.611869] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:49.837 [2024-07-15 16:06:16.611957] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:49.837 [2024-07-15 16:06:16.611996] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:49.838 [2024-07-15 16:06:16.612020] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:49.838 [2024-07-15 16:06:16.612054] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.838 [2024-07-15 16:06:16.618803] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc2c870 was disconnected and freed. delete nvme_qpair. 
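[Editor's note] The attach that just produced nvme0n1 was driven by the bdev_nvme_start_discovery RPC traced at host/discovery_remove_ifc.sh@69. A minimal standalone equivalent, calling scripts/rpc.py directly instead of the framework's rpc_cmd wrapper (every flag is copied from the trace; the jq check mirrors the get_bdev_list pipeline at sh@29):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Attach through the discovery service on 10.0.0.2:8009 and block until done.
"$rpc" -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# The namespace bdev should now be visible on the host app:
"$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect: nvme0n1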
00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:49.838 16:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.219 16:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.154 16:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.089 16:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.024 16:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
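[Editor's note] The repeating get_bdev_list / sleep 1 pattern above is the test's wait-for-bdev barrier: it polls the host app once a second until the bdev list matches an expectation. Reconstructed from the xtrace (sh@29, sh@33, sh@34) as a hedged sketch; this is the observed behavior, not necessarily the verbatim helpers from discovery_remove_ifc.sh:

# Stand-in for the framework's rpc_cmd wrapper from autotest_common.sh.
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

# Flatten the host's bdev names into one sorted line, e.g. "nvme0n1".
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the list equals $1
# ('nvme0n1' here; '' later, once the interface has been pulled).
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}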
00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.401 16:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.401 [2024-07-15 16:06:22.052973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:55.401 [2024-07-15 16:06:22.053058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.401 [2024-07-15 16:06:22.053080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.401 [2024-07-15 16:06:22.053099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.401 [2024-07-15 16:06:22.053112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.401 [2024-07-15 16:06:22.053126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.401 [2024-07-15 16:06:22.053138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.401 [2024-07-15 16:06:22.053151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.401 [2024-07-15 16:06:22.053164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.401 [2024-07-15 16:06:22.053184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.401 [2024-07-15 16:06:22.053214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.401 [2024-07-15 16:06:22.053229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3300 is same with the state(5) to be set 00:24:55.401 [2024-07-15 16:06:22.062988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3300 (9): Bad file descriptor 00:24:55.401 [2024-07-15 16:06:22.073035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.337 16:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.337 [2024-07-15 16:06:23.122927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:56.337 [2024-07-15 
16:06:23.122991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf3300 with addr=10.0.0.2, port=4420 00:24:56.337 [2024-07-15 16:06:23.123019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3300 is same with the state(5) to be set 00:24:56.337 [2024-07-15 16:06:23.123069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3300 (9): Bad file descriptor 00:24:56.337 [2024-07-15 16:06:23.123545] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.337 [2024-07-15 16:06:23.123581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.337 [2024-07-15 16:06:23.123599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.337 [2024-07-15 16:06:23.123617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.337 [2024-07-15 16:06:23.123648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.337 [2024-07-15 16:06:23.123668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.337 16:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.337 16:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.337 16:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.273 [2024-07-15 16:06:24.126188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:57.273 [2024-07-15 16:06:24.126218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:57.273 [2024-07-15 16:06:24.126234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:57.273 [2024-07-15 16:06:24.126248] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:57.273 [2024-07-15 16:06:24.126270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
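[Editor's note] The failure cadence above follows directly from the knobs passed at discovery time: connect() dies with errno 110, each reset retry comes --reconnect-delay-sec 1 apart, queued I/O is failed after --fast-io-fail-timeout-sec 1, and the controller (and with it nvme0n1) is torn down once --ctrlr-loss-timeout-sec 2 expires. The bdev_nvme_set_options RPC itself is visible earlier in this trace (sh@65 passes -e 1 before framework_start_init); whether it also accepts these three timeout flags for a process-wide default is an assumption here, to be verified against `rpc.py bdev_nvme_set_options -h` rather than read off this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Assumed global equivalent of the per-discovery flags traced at sh@69;
# must run before any controller is attached (the test uses --wait-for-rpc).
"$rpc" -s /tmp/host.sock bdev_nvme_set_options \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1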
00:24:57.273 [2024-07-15 16:06:24.126315] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:57.273 [2024-07-15 16:06:24.126356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.273 [2024-07-15 16:06:24.126388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.273 [2024-07-15 16:06:24.126410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.273 [2024-07-15 16:06:24.126425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.273 [2024-07-15 16:06:24.126441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.273 [2024-07-15 16:06:24.126455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.273 [2024-07-15 16:06:24.126470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.273 [2024-07-15 16:06:24.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.273 [2024-07-15 16:06:24.126499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.273 [2024-07-15 16:06:24.126514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.273 [2024-07-15 16:06:24.126528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:57.273 [2024-07-15 16:06:24.126709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf2780 (9): Bad file descriptor 00:24:57.273 [2024-07-15 16:06:24.127734] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:57.273 [2024-07-15 16:06:24.127760] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.273 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:57.274 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.274 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:57.533 16:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:58.473 16:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.410 [2024-07-15 16:06:26.186830] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:59.410 [2024-07-15 16:06:26.186866] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:59.410 [2024-07-15 16:06:26.186900] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:59.410 [2024-07-15 16:06:26.315341] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.410 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.670 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:59.670 16:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.670 [2024-07-15 16:06:26.418347] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:59.670 [2024-07-15 16:06:26.418399] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:59.670 [2024-07-15 16:06:26.418431] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:59.670 [2024-07-15 16:06:26.418454] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:59.670 [2024-07-15 16:06:26.418467] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:59.670 [2024-07-15 16:06:26.425081] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbfa110 was disconnected and freed. delete nvme_qpair. 
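[Editor's note] Between the two attaches the test bounced the target-side interface, which is the whole point of discovery_remove_ifc: pull the port out from under a connected host, then give it back. Condensed from the trace at sh@75/@76 and sh@82/@83, reusing the wait_for_bdev sketch from the earlier note (illustrative, not the script itself):

# Remove the target address and down the link: the host controller fails,
# outstanding I/O is aborted, and nvme0n1 drops out of bdev_get_bdevs.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''

# Restore it: the still-running discovery service re-attaches the subsystem
# as a fresh controller, so the namespace comes back as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev 'nvme1n1'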
00:25:00.608 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1236293 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1236293 ']' 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1236293 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1236293 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1236293' 00:25:00.609 killing process with pid 1236293 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1236293 00:25:00.609 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1236293 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.868 rmmod nvme_tcp 00:25:00.868 rmmod nvme_fabrics 00:25:00.868 rmmod nvme_keyring 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
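[Editor's note] The rmmod lines above come from nvmfcleanup (nvmf/common.sh@117 through @125): it syncs, then unloads the kernel initiator modules, tolerating failures because a just-disconnected host can keep nvme-tcp busy for a moment. A hedged reconstruction from the trace; the retry and break details are an assumption, not the verbatim common.sh code:

nvmfcleanup() {
    sync
    # Unloading can race with connection teardown, so tolerate failures
    # and retry for up to 20 seconds before giving up quietly.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    return 0
}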
00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1236155 ']' 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1236155 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1236155 ']' 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1236155 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1236155 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1236155' 00:25:00.868 killing process with pid 1236155 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1236155 00:25:00.868 16:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1236155 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.437 16:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.388 16:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.388 00:25:03.388 real 0m17.716s 00:25:03.388 user 0m25.614s 00:25:03.388 sys 0m3.075s 00:25:03.388 16:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.388 16:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.388 ************************************ 00:25:03.388 END TEST nvmf_discovery_remove_ifc 00:25:03.388 ************************************ 00:25:03.388 16:06:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:03.388 16:06:30 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:03.388 16:06:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:03.388 16:06:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.388 16:06:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.388 ************************************ 00:25:03.388 START TEST nvmf_identify_kernel_target 00:25:03.388 ************************************ 
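The START/END banners and the real/user/sys summary bracketing each test come from the run_test harness. A sketch of its likely shape, inferred from the "'[' 3 -le 1 ']'" argument guard and the time output in the trace; not the verbatim implementation:

    # Hypothetical run_test wrapper: time a sub-script between banners.
    run_test() {
        [[ $# -le 1 ]] && return 1        # need a name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                         # emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_identify_kernel_target ./identify_kernel_nvmf.sh --transport=tcp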
00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:03.388 * Looking for test storage... 00:25:03.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.388 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:03.389 16:06:30 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.389 16:06:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.292 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.293 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.550 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:05.551 00:25:05.551 --- 10.0.0.2 ping statistics --- 00:25:05.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.551 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
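Everything from the addr flushes through these pings is nvmf_tcp_init splitting the two E810 ports across a network namespace, so a single host can play both NVMe-oF target and initiator. A condensed, runnable replay of the traced commands (run as root; interface names and addresses are the ones this run derived, and the in-namespace ping's replies continue in the log just below):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator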
00:25:05.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:25:05.551 00:25:05.551 --- 10.0.0.1 ping statistics --- 00:25:05.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.551 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:05.551 16:06:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:05.551 16:06:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:06.927 Waiting for block devices as requested 00:25:06.927 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:06.927 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:06.927 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:07.186 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:07.186 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:07.186 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:07.186 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:07.186 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:07.444 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:07.444 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:07.444 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:07.444 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:07.706 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:07.706 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:07.706 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:07.706 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:07.967 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:07.967 No valid GPT data, bailing 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:07.967 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:25:08.227 00:25:08.227 Discovery Log Number of Records 2, Generation counter 2 00:25:08.227 =====Discovery Log Entry 0====== 00:25:08.227 trtype: tcp 00:25:08.227 adrfam: ipv4 00:25:08.227 subtype: current discovery subsystem 00:25:08.227 treq: not specified, sq flow control disable supported 00:25:08.227 portid: 1 00:25:08.227 trsvcid: 4420 00:25:08.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:08.227 traddr: 10.0.0.1 00:25:08.227 eflags: none 00:25:08.227 sectype: none 00:25:08.227 =====Discovery Log Entry 1====== 00:25:08.227 trtype: tcp 00:25:08.227 adrfam: ipv4 00:25:08.227 subtype: nvme subsystem 00:25:08.227 treq: not specified, sq flow control disable supported 00:25:08.227 portid: 1 00:25:08.227 trsvcid: 4420 00:25:08.227 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:08.227 traddr: 10.0.0.1 00:25:08.227 eflags: none 00:25:08.227 sectype: none 00:25:08.227 16:06:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:08.227 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:08.227 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.227 ===================================================== 00:25:08.227 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:08.227 ===================================================== 00:25:08.227 Controller Capabilities/Features 00:25:08.227 ================================ 00:25:08.227 Vendor ID: 0000 00:25:08.227 Subsystem Vendor ID: 0000 00:25:08.227 Serial Number: ec6bef53a015ccbc1de3 00:25:08.227 Model Number: Linux 00:25:08.227 Firmware Version: 6.7.0-68 00:25:08.227 Recommended Arb Burst: 0 00:25:08.227 IEEE OUI Identifier: 00 00 00 00:25:08.227 Multi-path I/O 00:25:08.227 May have multiple subsystem ports: No 00:25:08.227 May have multiple 
controllers: No 00:25:08.227 Associated with SR-IOV VF: No 00:25:08.227 Max Data Transfer Size: Unlimited 00:25:08.227 Max Number of Namespaces: 0 00:25:08.227 Max Number of I/O Queues: 1024 00:25:08.227 NVMe Specification Version (VS): 1.3 00:25:08.227 NVMe Specification Version (Identify): 1.3 00:25:08.227 Maximum Queue Entries: 1024 00:25:08.227 Contiguous Queues Required: No 00:25:08.227 Arbitration Mechanisms Supported 00:25:08.227 Weighted Round Robin: Not Supported 00:25:08.227 Vendor Specific: Not Supported 00:25:08.227 Reset Timeout: 7500 ms 00:25:08.227 Doorbell Stride: 4 bytes 00:25:08.227 NVM Subsystem Reset: Not Supported 00:25:08.227 Command Sets Supported 00:25:08.227 NVM Command Set: Supported 00:25:08.227 Boot Partition: Not Supported 00:25:08.227 Memory Page Size Minimum: 4096 bytes 00:25:08.227 Memory Page Size Maximum: 4096 bytes 00:25:08.227 Persistent Memory Region: Not Supported 00:25:08.227 Optional Asynchronous Events Supported 00:25:08.227 Namespace Attribute Notices: Not Supported 00:25:08.227 Firmware Activation Notices: Not Supported 00:25:08.227 ANA Change Notices: Not Supported 00:25:08.227 PLE Aggregate Log Change Notices: Not Supported 00:25:08.227 LBA Status Info Alert Notices: Not Supported 00:25:08.227 EGE Aggregate Log Change Notices: Not Supported 00:25:08.227 Normal NVM Subsystem Shutdown event: Not Supported 00:25:08.227 Zone Descriptor Change Notices: Not Supported 00:25:08.227 Discovery Log Change Notices: Supported 00:25:08.227 Controller Attributes 00:25:08.227 128-bit Host Identifier: Not Supported 00:25:08.227 Non-Operational Permissive Mode: Not Supported 00:25:08.227 NVM Sets: Not Supported 00:25:08.227 Read Recovery Levels: Not Supported 00:25:08.227 Endurance Groups: Not Supported 00:25:08.228 Predictable Latency Mode: Not Supported 00:25:08.228 Traffic Based Keep ALive: Not Supported 00:25:08.228 Namespace Granularity: Not Supported 00:25:08.228 SQ Associations: Not Supported 00:25:08.228 UUID List: Not Supported 00:25:08.228 Multi-Domain Subsystem: Not Supported 00:25:08.228 Fixed Capacity Management: Not Supported 00:25:08.228 Variable Capacity Management: Not Supported 00:25:08.228 Delete Endurance Group: Not Supported 00:25:08.228 Delete NVM Set: Not Supported 00:25:08.228 Extended LBA Formats Supported: Not Supported 00:25:08.228 Flexible Data Placement Supported: Not Supported 00:25:08.228 00:25:08.228 Controller Memory Buffer Support 00:25:08.228 ================================ 00:25:08.228 Supported: No 00:25:08.228 00:25:08.228 Persistent Memory Region Support 00:25:08.228 ================================ 00:25:08.228 Supported: No 00:25:08.228 00:25:08.228 Admin Command Set Attributes 00:25:08.228 ============================ 00:25:08.228 Security Send/Receive: Not Supported 00:25:08.228 Format NVM: Not Supported 00:25:08.228 Firmware Activate/Download: Not Supported 00:25:08.228 Namespace Management: Not Supported 00:25:08.228 Device Self-Test: Not Supported 00:25:08.228 Directives: Not Supported 00:25:08.228 NVMe-MI: Not Supported 00:25:08.228 Virtualization Management: Not Supported 00:25:08.228 Doorbell Buffer Config: Not Supported 00:25:08.228 Get LBA Status Capability: Not Supported 00:25:08.228 Command & Feature Lockdown Capability: Not Supported 00:25:08.228 Abort Command Limit: 1 00:25:08.228 Async Event Request Limit: 1 00:25:08.228 Number of Firmware Slots: N/A 00:25:08.228 Firmware Slot 1 Read-Only: N/A 00:25:08.228 Firmware Activation Without Reset: N/A 00:25:08.228 Multiple Update Detection Support: N/A 
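Pausing the identify dump for a moment (it resumes just below): the configure_kernel_target steps traced before it are the standard nvmet configfs sequence. In this hedged reconstruction the values written are taken from the trace, while the configfs file names are filled in from the usual nvmet layout and are assumptions:

    SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet nvmet-tcp        # this run loads the modules earlier

    mkdir -p "$SUB/namespaces/1" "$PORT"
    # Likely destination of the "echo SPDK-nqn..." write: it matches the
    # Model Number later reported by identify (assumption).
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUB/attr_model"
    echo 1            > "$SUB/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
    echo 1            > "$SUB/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"   # exposes the subsystem on the port

    # clean_kernel_target, traced near the end of this test, reverses it:
    # echo 0 > "$SUB/namespaces/1/enable"
    # rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
    # rmdir "$SUB/namespaces/1" "$PORT" "$SUB"
    # modprobe -r nvmet_tcp nvmet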
00:25:08.228 Firmware Update Granularity: No Information Provided 00:25:08.228 Per-Namespace SMART Log: No 00:25:08.228 Asymmetric Namespace Access Log Page: Not Supported 00:25:08.228 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:08.228 Command Effects Log Page: Not Supported 00:25:08.228 Get Log Page Extended Data: Supported 00:25:08.228 Telemetry Log Pages: Not Supported 00:25:08.228 Persistent Event Log Pages: Not Supported 00:25:08.228 Supported Log Pages Log Page: May Support 00:25:08.228 Commands Supported & Effects Log Page: Not Supported 00:25:08.228 Feature Identifiers & Effects Log Page:May Support 00:25:08.228 NVMe-MI Commands & Effects Log Page: May Support 00:25:08.228 Data Area 4 for Telemetry Log: Not Supported 00:25:08.228 Error Log Page Entries Supported: 1 00:25:08.228 Keep Alive: Not Supported 00:25:08.228 00:25:08.228 NVM Command Set Attributes 00:25:08.228 ========================== 00:25:08.228 Submission Queue Entry Size 00:25:08.228 Max: 1 00:25:08.228 Min: 1 00:25:08.228 Completion Queue Entry Size 00:25:08.228 Max: 1 00:25:08.228 Min: 1 00:25:08.228 Number of Namespaces: 0 00:25:08.228 Compare Command: Not Supported 00:25:08.228 Write Uncorrectable Command: Not Supported 00:25:08.228 Dataset Management Command: Not Supported 00:25:08.228 Write Zeroes Command: Not Supported 00:25:08.228 Set Features Save Field: Not Supported 00:25:08.228 Reservations: Not Supported 00:25:08.228 Timestamp: Not Supported 00:25:08.228 Copy: Not Supported 00:25:08.228 Volatile Write Cache: Not Present 00:25:08.228 Atomic Write Unit (Normal): 1 00:25:08.228 Atomic Write Unit (PFail): 1 00:25:08.228 Atomic Compare & Write Unit: 1 00:25:08.228 Fused Compare & Write: Not Supported 00:25:08.228 Scatter-Gather List 00:25:08.228 SGL Command Set: Supported 00:25:08.228 SGL Keyed: Not Supported 00:25:08.228 SGL Bit Bucket Descriptor: Not Supported 00:25:08.228 SGL Metadata Pointer: Not Supported 00:25:08.228 Oversized SGL: Not Supported 00:25:08.228 SGL Metadata Address: Not Supported 00:25:08.228 SGL Offset: Supported 00:25:08.228 Transport SGL Data Block: Not Supported 00:25:08.228 Replay Protected Memory Block: Not Supported 00:25:08.228 00:25:08.228 Firmware Slot Information 00:25:08.228 ========================= 00:25:08.228 Active slot: 0 00:25:08.228 00:25:08.228 00:25:08.228 Error Log 00:25:08.228 ========= 00:25:08.228 00:25:08.228 Active Namespaces 00:25:08.228 ================= 00:25:08.228 Discovery Log Page 00:25:08.228 ================== 00:25:08.228 Generation Counter: 2 00:25:08.228 Number of Records: 2 00:25:08.228 Record Format: 0 00:25:08.228 00:25:08.228 Discovery Log Entry 0 00:25:08.228 ---------------------- 00:25:08.228 Transport Type: 3 (TCP) 00:25:08.228 Address Family: 1 (IPv4) 00:25:08.228 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:08.228 Entry Flags: 00:25:08.228 Duplicate Returned Information: 0 00:25:08.228 Explicit Persistent Connection Support for Discovery: 0 00:25:08.228 Transport Requirements: 00:25:08.228 Secure Channel: Not Specified 00:25:08.228 Port ID: 1 (0x0001) 00:25:08.228 Controller ID: 65535 (0xffff) 00:25:08.228 Admin Max SQ Size: 32 00:25:08.228 Transport Service Identifier: 4420 00:25:08.228 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:08.228 Transport Address: 10.0.0.1 00:25:08.228 Discovery Log Entry 1 00:25:08.228 ---------------------- 00:25:08.228 Transport Type: 3 (TCP) 00:25:08.228 Address Family: 1 (IPv4) 00:25:08.228 Subsystem Type: 2 (NVM Subsystem) 00:25:08.228 Entry Flags: 
00:25:08.228 Duplicate Returned Information: 0 00:25:08.228 Explicit Persistent Connection Support for Discovery: 0 00:25:08.228 Transport Requirements: 00:25:08.228 Secure Channel: Not Specified 00:25:08.228 Port ID: 1 (0x0001) 00:25:08.228 Controller ID: 65535 (0xffff) 00:25:08.228 Admin Max SQ Size: 32 00:25:08.228 Transport Service Identifier: 4420 00:25:08.228 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:08.228 Transport Address: 10.0.0.1 00:25:08.228 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:08.228 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.228 get_feature(0x01) failed 00:25:08.228 get_feature(0x02) failed 00:25:08.228 get_feature(0x04) failed 00:25:08.228 ===================================================== 00:25:08.228 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:08.228 ===================================================== 00:25:08.228 Controller Capabilities/Features 00:25:08.228 ================================ 00:25:08.228 Vendor ID: 0000 00:25:08.228 Subsystem Vendor ID: 0000 00:25:08.228 Serial Number: 572984651194004a9ebc 00:25:08.228 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:08.228 Firmware Version: 6.7.0-68 00:25:08.228 Recommended Arb Burst: 6 00:25:08.228 IEEE OUI Identifier: 00 00 00 00:25:08.228 Multi-path I/O 00:25:08.228 May have multiple subsystem ports: Yes 00:25:08.228 May have multiple controllers: Yes 00:25:08.228 Associated with SR-IOV VF: No 00:25:08.228 Max Data Transfer Size: Unlimited 00:25:08.228 Max Number of Namespaces: 1024 00:25:08.228 Max Number of I/O Queues: 128 00:25:08.228 NVMe Specification Version (VS): 1.3 00:25:08.228 NVMe Specification Version (Identify): 1.3 00:25:08.228 Maximum Queue Entries: 1024 00:25:08.228 Contiguous Queues Required: No 00:25:08.228 Arbitration Mechanisms Supported 00:25:08.228 Weighted Round Robin: Not Supported 00:25:08.228 Vendor Specific: Not Supported 00:25:08.228 Reset Timeout: 7500 ms 00:25:08.228 Doorbell Stride: 4 bytes 00:25:08.228 NVM Subsystem Reset: Not Supported 00:25:08.228 Command Sets Supported 00:25:08.228 NVM Command Set: Supported 00:25:08.228 Boot Partition: Not Supported 00:25:08.228 Memory Page Size Minimum: 4096 bytes 00:25:08.228 Memory Page Size Maximum: 4096 bytes 00:25:08.228 Persistent Memory Region: Not Supported 00:25:08.228 Optional Asynchronous Events Supported 00:25:08.228 Namespace Attribute Notices: Supported 00:25:08.228 Firmware Activation Notices: Not Supported 00:25:08.228 ANA Change Notices: Supported 00:25:08.228 PLE Aggregate Log Change Notices: Not Supported 00:25:08.228 LBA Status Info Alert Notices: Not Supported 00:25:08.228 EGE Aggregate Log Change Notices: Not Supported 00:25:08.228 Normal NVM Subsystem Shutdown event: Not Supported 00:25:08.228 Zone Descriptor Change Notices: Not Supported 00:25:08.228 Discovery Log Change Notices: Not Supported 00:25:08.228 Controller Attributes 00:25:08.228 128-bit Host Identifier: Supported 00:25:08.228 Non-Operational Permissive Mode: Not Supported 00:25:08.228 NVM Sets: Not Supported 00:25:08.228 Read Recovery Levels: Not Supported 00:25:08.228 Endurance Groups: Not Supported 00:25:08.228 Predictable Latency Mode: Not Supported 00:25:08.228 Traffic Based Keep ALive: Supported 00:25:08.229 Namespace Granularity: Not Supported 
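Both identify passes drive the same binary; the -r argument is an SPDK transport ID string, and the get_feature(0x01)/(0x02)/(0x04) failures above appear to be the kernel target declining optional features rather than a test failure. A generic invocation, paths relative to the SPDK checkout as in this workspace:

    # Identify the kernel target's NVM subsystem over TCP.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'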
00:25:08.229 SQ Associations: Not Supported 00:25:08.229 UUID List: Not Supported 00:25:08.229 Multi-Domain Subsystem: Not Supported 00:25:08.229 Fixed Capacity Management: Not Supported 00:25:08.229 Variable Capacity Management: Not Supported 00:25:08.229 Delete Endurance Group: Not Supported 00:25:08.229 Delete NVM Set: Not Supported 00:25:08.229 Extended LBA Formats Supported: Not Supported 00:25:08.229 Flexible Data Placement Supported: Not Supported 00:25:08.229 00:25:08.229 Controller Memory Buffer Support 00:25:08.229 ================================ 00:25:08.229 Supported: No 00:25:08.229 00:25:08.229 Persistent Memory Region Support 00:25:08.229 ================================ 00:25:08.229 Supported: No 00:25:08.229 00:25:08.229 Admin Command Set Attributes 00:25:08.229 ============================ 00:25:08.229 Security Send/Receive: Not Supported 00:25:08.229 Format NVM: Not Supported 00:25:08.229 Firmware Activate/Download: Not Supported 00:25:08.229 Namespace Management: Not Supported 00:25:08.229 Device Self-Test: Not Supported 00:25:08.229 Directives: Not Supported 00:25:08.229 NVMe-MI: Not Supported 00:25:08.229 Virtualization Management: Not Supported 00:25:08.229 Doorbell Buffer Config: Not Supported 00:25:08.229 Get LBA Status Capability: Not Supported 00:25:08.229 Command & Feature Lockdown Capability: Not Supported 00:25:08.229 Abort Command Limit: 4 00:25:08.229 Async Event Request Limit: 4 00:25:08.229 Number of Firmware Slots: N/A 00:25:08.229 Firmware Slot 1 Read-Only: N/A 00:25:08.229 Firmware Activation Without Reset: N/A 00:25:08.229 Multiple Update Detection Support: N/A 00:25:08.229 Firmware Update Granularity: No Information Provided 00:25:08.229 Per-Namespace SMART Log: Yes 00:25:08.229 Asymmetric Namespace Access Log Page: Supported 00:25:08.229 ANA Transition Time : 10 sec 00:25:08.229 00:25:08.229 Asymmetric Namespace Access Capabilities 00:25:08.229 ANA Optimized State : Supported 00:25:08.229 ANA Non-Optimized State : Supported 00:25:08.229 ANA Inaccessible State : Supported 00:25:08.229 ANA Persistent Loss State : Supported 00:25:08.229 ANA Change State : Supported 00:25:08.229 ANAGRPID is not changed : No 00:25:08.229 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:08.229 00:25:08.229 ANA Group Identifier Maximum : 128 00:25:08.229 Number of ANA Group Identifiers : 128 00:25:08.229 Max Number of Allowed Namespaces : 1024 00:25:08.229 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:08.229 Command Effects Log Page: Supported 00:25:08.229 Get Log Page Extended Data: Supported 00:25:08.229 Telemetry Log Pages: Not Supported 00:25:08.229 Persistent Event Log Pages: Not Supported 00:25:08.229 Supported Log Pages Log Page: May Support 00:25:08.229 Commands Supported & Effects Log Page: Not Supported 00:25:08.229 Feature Identifiers & Effects Log Page:May Support 00:25:08.229 NVMe-MI Commands & Effects Log Page: May Support 00:25:08.229 Data Area 4 for Telemetry Log: Not Supported 00:25:08.229 Error Log Page Entries Supported: 128 00:25:08.229 Keep Alive: Supported 00:25:08.229 Keep Alive Granularity: 1000 ms 00:25:08.229 00:25:08.229 NVM Command Set Attributes 00:25:08.229 ========================== 00:25:08.229 Submission Queue Entry Size 00:25:08.229 Max: 64 00:25:08.229 Min: 64 00:25:08.229 Completion Queue Entry Size 00:25:08.229 Max: 16 00:25:08.229 Min: 16 00:25:08.229 Number of Namespaces: 1024 00:25:08.229 Compare Command: Not Supported 00:25:08.229 Write Uncorrectable Command: Not Supported 00:25:08.229 Dataset Management Command: Supported 
00:25:08.229 Write Zeroes Command: Supported 00:25:08.229 Set Features Save Field: Not Supported 00:25:08.229 Reservations: Not Supported 00:25:08.229 Timestamp: Not Supported 00:25:08.229 Copy: Not Supported 00:25:08.229 Volatile Write Cache: Present 00:25:08.229 Atomic Write Unit (Normal): 1 00:25:08.229 Atomic Write Unit (PFail): 1 00:25:08.229 Atomic Compare & Write Unit: 1 00:25:08.229 Fused Compare & Write: Not Supported 00:25:08.229 Scatter-Gather List 00:25:08.229 SGL Command Set: Supported 00:25:08.229 SGL Keyed: Not Supported 00:25:08.229 SGL Bit Bucket Descriptor: Not Supported 00:25:08.229 SGL Metadata Pointer: Not Supported 00:25:08.229 Oversized SGL: Not Supported 00:25:08.229 SGL Metadata Address: Not Supported 00:25:08.229 SGL Offset: Supported 00:25:08.229 Transport SGL Data Block: Not Supported 00:25:08.229 Replay Protected Memory Block: Not Supported 00:25:08.229 00:25:08.229 Firmware Slot Information 00:25:08.229 ========================= 00:25:08.229 Active slot: 0 00:25:08.229 00:25:08.229 Asymmetric Namespace Access 00:25:08.229 =========================== 00:25:08.229 Change Count : 0 00:25:08.229 Number of ANA Group Descriptors : 1 00:25:08.229 ANA Group Descriptor : 0 00:25:08.229 ANA Group ID : 1 00:25:08.229 Number of NSID Values : 1 00:25:08.229 Change Count : 0 00:25:08.229 ANA State : 1 00:25:08.229 Namespace Identifier : 1 00:25:08.229 00:25:08.229 Commands Supported and Effects 00:25:08.229 ============================== 00:25:08.229 Admin Commands 00:25:08.229 -------------- 00:25:08.229 Get Log Page (02h): Supported 00:25:08.229 Identify (06h): Supported 00:25:08.229 Abort (08h): Supported 00:25:08.229 Set Features (09h): Supported 00:25:08.229 Get Features (0Ah): Supported 00:25:08.229 Asynchronous Event Request (0Ch): Supported 00:25:08.229 Keep Alive (18h): Supported 00:25:08.229 I/O Commands 00:25:08.229 ------------ 00:25:08.229 Flush (00h): Supported 00:25:08.229 Write (01h): Supported LBA-Change 00:25:08.229 Read (02h): Supported 00:25:08.229 Write Zeroes (08h): Supported LBA-Change 00:25:08.229 Dataset Management (09h): Supported 00:25:08.229 00:25:08.229 Error Log 00:25:08.229 ========= 00:25:08.229 Entry: 0 00:25:08.229 Error Count: 0x3 00:25:08.229 Submission Queue Id: 0x0 00:25:08.229 Command Id: 0x5 00:25:08.229 Phase Bit: 0 00:25:08.229 Status Code: 0x2 00:25:08.229 Status Code Type: 0x0 00:25:08.229 Do Not Retry: 1 00:25:08.229 Error Location: 0x28 00:25:08.229 LBA: 0x0 00:25:08.229 Namespace: 0x0 00:25:08.229 Vendor Log Page: 0x0 00:25:08.229 ----------- 00:25:08.229 Entry: 1 00:25:08.229 Error Count: 0x2 00:25:08.229 Submission Queue Id: 0x0 00:25:08.229 Command Id: 0x5 00:25:08.229 Phase Bit: 0 00:25:08.229 Status Code: 0x2 00:25:08.229 Status Code Type: 0x0 00:25:08.229 Do Not Retry: 1 00:25:08.229 Error Location: 0x28 00:25:08.229 LBA: 0x0 00:25:08.229 Namespace: 0x0 00:25:08.229 Vendor Log Page: 0x0 00:25:08.229 ----------- 00:25:08.229 Entry: 2 00:25:08.229 Error Count: 0x1 00:25:08.229 Submission Queue Id: 0x0 00:25:08.229 Command Id: 0x4 00:25:08.229 Phase Bit: 0 00:25:08.229 Status Code: 0x2 00:25:08.229 Status Code Type: 0x0 00:25:08.229 Do Not Retry: 1 00:25:08.229 Error Location: 0x28 00:25:08.229 LBA: 0x0 00:25:08.229 Namespace: 0x0 00:25:08.229 Vendor Log Page: 0x0 00:25:08.229 00:25:08.229 Number of Queues 00:25:08.229 ================ 00:25:08.229 Number of I/O Submission Queues: 128 00:25:08.229 Number of I/O Completion Queues: 128 00:25:08.229 00:25:08.229 ZNS Specific Controller Data 00:25:08.229 
============================ 00:25:08.229 Zone Append Size Limit: 0 00:25:08.229 00:25:08.229 00:25:08.229 Active Namespaces 00:25:08.229 ================= 00:25:08.229 get_feature(0x05) failed 00:25:08.229 Namespace ID:1 00:25:08.229 Command Set Identifier: NVM (00h) 00:25:08.229 Deallocate: Supported 00:25:08.229 Deallocated/Unwritten Error: Not Supported 00:25:08.229 Deallocated Read Value: Unknown 00:25:08.229 Deallocate in Write Zeroes: Not Supported 00:25:08.229 Deallocated Guard Field: 0xFFFF 00:25:08.229 Flush: Supported 00:25:08.229 Reservation: Not Supported 00:25:08.229 Namespace Sharing Capabilities: Multiple Controllers 00:25:08.229 Size (in LBAs): 1953525168 (931GiB) 00:25:08.229 Capacity (in LBAs): 1953525168 (931GiB) 00:25:08.229 Utilization (in LBAs): 1953525168 (931GiB) 00:25:08.229 UUID: 78066a66-9308-48d4-b638-902766b9d6ce 00:25:08.229 Thin Provisioning: Not Supported 00:25:08.229 Per-NS Atomic Units: Yes 00:25:08.229 Atomic Boundary Size (Normal): 0 00:25:08.229 Atomic Boundary Size (PFail): 0 00:25:08.229 Atomic Boundary Offset: 0 00:25:08.229 NGUID/EUI64 Never Reused: No 00:25:08.229 ANA group ID: 1 00:25:08.229 Namespace Write Protected: No 00:25:08.229 Number of LBA Formats: 1 00:25:08.229 Current LBA Format: LBA Format #00 00:25:08.229 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:08.229 00:25:08.229 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:08.229 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.229 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:08.229 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.229 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:08.230 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.230 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.230 rmmod nvme_tcp 00:25:08.489 rmmod nvme_fabrics 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.489 16:06:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.392 
16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:10.392 16:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:11.767 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:11.767 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:11.767 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:12.742 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:12.742 00:25:12.742 real 0m9.496s 00:25:12.742 user 0m2.084s 00:25:12.742 sys 0m3.398s 00:25:12.742 16:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.742 16:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.742 ************************************ 00:25:12.742 END TEST nvmf_identify_kernel_target 00:25:12.742 ************************************ 00:25:13.001 16:06:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.001 16:06:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:13.001 16:06:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.001 16:06:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.001 16:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.001 ************************************ 00:25:13.001 START TEST nvmf_auth_host 00:25:13.001 ************************************ 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:13.001 * Looking for test storage... 00:25:13.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:13.001 16:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.905 
16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.905 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:14.906 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:14.906 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:14.906 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:14.906 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:25:14.906 00:25:14.906 --- 10.0.0.2 ping statistics --- 00:25:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.906 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:14.906 00:25:14.906 --- 10.0.0.1 ping statistics --- 00:25:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.906 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.906 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1243314 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1243314 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1243314 ']' 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
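The NIC discovery and network bring-up traced above condense to the following sketch; every name and address is taken from this run. The two E810 ports (device ID 0x159b, ice driver) surface as cvl_0_0, which is moved into a private network namespace for the target side, and cvl_0_1, which stays in the host namespace for the initiator side:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # host ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> host ns

The nvmf_tgt launch that follows is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (nvmf/common.sh@270), so the SPDK app runs on the 10.0.0.2 side while the kernel target configured later in this test listens on 10.0.0.1 in the host namespace.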
00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.165 16:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a992de86a3cd9ca722acdd6b267ca5a6 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.M8c 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a992de86a3cd9ca722acdd6b267ca5a6 0 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a992de86a3cd9ca722acdd6b267ca5a6 0 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a992de86a3cd9ca722acdd6b267ca5a6 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.M8c 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.M8c 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.M8c 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.101 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:16.102 
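gen_dhchap_key, whose first invocation is traced above, pulls the secret from /dev/urandom with xxd and packs it into the DHHC-1 interchange format via the inline 'python -' step. A minimal standalone sketch of that packing, under the assumption that the format is base64(ASCII secret || little-endian CRC32) as nvme-cli uses, with digest index 00 meaning a null HMAC per the digests map at nvmf/common.sh@724:

  secret=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in the trace
  # assumed packing: the hex text itself is the secret, suffixed with its CRC32
  key=$(python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode()+":")' "$secret")
  file=$(mktemp -t spdk.key-null.XXX)
  echo "$key" > "$file" && chmod 0600 "$file"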
16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3ef2a5c3e2d7b708982a7b45b0c788ff7c53ccf31854df49ec2b6e8a2c91e8a5 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1aA 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3ef2a5c3e2d7b708982a7b45b0c788ff7c53ccf31854df49ec2b6e8a2c91e8a5 3 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3ef2a5c3e2d7b708982a7b45b0c788ff7c53ccf31854df49ec2b6e8a2c91e8a5 3 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3ef2a5c3e2d7b708982a7b45b0c788ff7c53ccf31854df49ec2b6e8a2c91e8a5 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:16.102 16:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1aA 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1aA 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1aA 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.102 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc22262989bb2b3ec7256a2638ae14da6707404769ae225b 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U0Y 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc22262989bb2b3ec7256a2638ae14da6707404769ae225b 0 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc22262989bb2b3ec7256a2638ae14da6707404769ae225b 0 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc22262989bb2b3ec7256a2638ae14da6707404769ae225b 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U0Y 00:25:16.360 16:06:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U0Y 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.U0Y 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa2af5caf6e599a210cc5e53f4b690987240d6baf6a956ae 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JJs 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa2af5caf6e599a210cc5e53f4b690987240d6baf6a956ae 2 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa2af5caf6e599a210cc5e53f4b690987240d6baf6a956ae 2 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa2af5caf6e599a210cc5e53f4b690987240d6baf6a956ae 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JJs 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JJs 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.JJs 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d049a4f395ccb1b3cdb954ce12deedab 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZUC 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d049a4f395ccb1b3cdb954ce12deedab 1 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d049a4f395ccb1b3cdb954ce12deedab 1 
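Each slot pairs a host key (keys[i]) with an optional controller key (ckeys[i]), so the matrix also covers bidirectional authentication, and the controller key carries its own digest independent of the host key's. As a purely hypothetical spot-check (the script never does this, and it assumes an nvme-cli new enough to ship check-dhchap-key), a generated key's CRC can be validated with:

  nvme check-dhchap-key --key="$(cat /tmp/spdk.key-null.U0Y)"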
00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d049a4f395ccb1b3cdb954ce12deedab 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZUC 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZUC 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZUC 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90a2555333aa46eaf74a474191825295 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.360 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TAh 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90a2555333aa46eaf74a474191825295 1 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90a2555333aa46eaf74a474191825295 1 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90a2555333aa46eaf74a474191825295 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TAh 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TAh 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TAh 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=63cc1f65620f8b4503464b36b90a03f6a97c05fcc79c8e76 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SBe 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 63cc1f65620f8b4503464b36b90a03f6a97c05fcc79c8e76 2 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 63cc1f65620f8b4503464b36b90a03f6a97c05fcc79c8e76 2 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=63cc1f65620f8b4503464b36b90a03f6a97c05fcc79c8e76 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.361 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SBe 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SBe 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SBe 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=47e507c1c98e7ebaa347d2e35e74aebc 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9wS 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 47e507c1c98e7ebaa347d2e35e74aebc 0 00:25:16.619 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 47e507c1c98e7ebaa347d2e35e74aebc 0 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=47e507c1c98e7ebaa347d2e35e74aebc 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9wS 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9wS 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9wS 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8235bf60064041fee1903ac1ba894e57dd985c95909cb5910ff6c8571fee85be 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.d7t 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8235bf60064041fee1903ac1ba894e57dd985c95909cb5910ff6c8571fee85be 3 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8235bf60064041fee1903ac1ba894e57dd985c95909cb5910ff6c8571fee85be 3 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8235bf60064041fee1903ac1ba894e57dd985c95909cb5910ff6c8571fee85be 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.d7t 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.d7t 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.d7t 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1243314 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1243314 ']' 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
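Recapping the generation phase just traced, the five slots now hold (paths from this run):

  keys[0]=/tmp/spdk.key-null.M8c      ckeys[0]=/tmp/spdk.key-sha512.1aA
  keys[1]=/tmp/spdk.key-null.U0Y      ckeys[1]=/tmp/spdk.key-sha384.JJs
  keys[2]=/tmp/spdk.key-sha256.ZUC    ckeys[2]=/tmp/spdk.key-sha256.TAh
  keys[3]=/tmp/spdk.key-sha384.SBe    ckeys[3]=/tmp/spdk.key-null.9wS
  keys[4]=/tmp/spdk.key-sha512.d7t    ckeys[4]=                        # no ctrlr key: unidirectional case

and the registration loop traced next boils down to:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
      [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done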
00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.620 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M8c 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1aA ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1aA 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.U0Y 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.JJs ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JJs 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZUC 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TAh ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TAh 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SBe 00:25:16.879 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9wS ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9wS 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.d7t 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
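nvmet_auth_init resolves the initiator-side address (10.0.0.1 above) and hands it to configure_kernel_target, which the next stretch of trace executes: setup.sh reset returns the disk from vfio-pci to the nvme driver, the GPT probe confirms it is unused (the "No valid GPT data, bailing" line below is the expected, successful outcome of that check), and the kernel target is built through configfs. Condensed, with the caveat that the redirection targets are the standard nvmet attribute files rather than something the trace spells out:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target file
  echo 1            > "$subsys/attr_allow_any_host"             # assumed target file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"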
00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.880 16:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:17.815 Waiting for block devices as requested 00:25:18.075 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:18.075 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:18.343 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:18.343 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:18.343 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:18.603 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:18.603 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:18.603 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:18.603 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:18.862 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:18.862 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:18.862 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:18.862 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:18.862 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:19.121 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:19.121 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:19.121 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:19.724 No valid GPT data, bailing 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:25:19.724 00:25:19.724 Discovery Log Number of Records 2, Generation counter 2 00:25:19.724 =====Discovery Log Entry 0====== 00:25:19.724 trtype: tcp 00:25:19.724 adrfam: ipv4 00:25:19.724 subtype: current discovery subsystem 00:25:19.724 treq: not specified, sq flow control disable supported 00:25:19.724 portid: 1 00:25:19.724 trsvcid: 4420 00:25:19.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:19.724 traddr: 10.0.0.1 00:25:19.724 eflags: none 00:25:19.724 sectype: none 00:25:19.724 =====Discovery Log Entry 1====== 00:25:19.724 trtype: tcp 00:25:19.724 adrfam: ipv4 00:25:19.724 subtype: nvme subsystem 00:25:19.724 treq: not specified, sq flow control disable supported 00:25:19.724 portid: 1 00:25:19.724 trsvcid: 4420 00:25:19.724 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:19.724 traddr: 10.0.0.1 00:25:19.724 eflags: none 00:25:19.724 sectype: none 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 
]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.724 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.983 nvme0n1 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.983 
16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.983 
16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.983 16:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.244 nvme0n1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.244 16:06:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.244 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.504 nvme0n1 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
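
The nvmet_auth_set_key frames above (host/auth.sh@42-51) only record the values being echoed; xtrace does not show the redirection targets. A minimal reconstruction of what the helper plausibly does on the kernel-target side, assuming the standard nvmet per-host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the keys/ckeys arrays defined earlier in auth.sh; the attribute paths are an assumption, not taken from this log:

# Hypothetical reconstruction; the configfs attribute names are assumptions
# based on the nvmet host-auth layout and are not visible in the trace above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"   # e.g. hmac(sha256)
    echo "$dhgroup" > "$host/dhchap_dhgroup"     # e.g. ffdhe2048
    echo "$key" > "$host/dhchap_key"             # host secret, DHHC-1:...
    # A controller key is optional: only written when the keyid under test
    # has a ckey, i.e. when bidirectional authentication is exercised.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
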
00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.504 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.505 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.764 nvme0n1 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:20.764 16:06:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.764 nvme0n1 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.764 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.024 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.025 nvme0n1 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.025 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.284 16:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.284 nvme0n1 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.284 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.543 nvme0n1 00:25:21.543 
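
connect_authenticate (host/auth.sh@55-61, traced repeatedly above) drives the initiator side with two RPCs whose flags appear verbatim in the log; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the same sequence can be issued standalone. key1/ckey1 name keys the test registers earlier in the run, which is not shown in this excerpt:

# Standalone equivalent of the traced RPC pair for the sha256/ffdhe3072 pass.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

Each successful attach surfaces the namespace as bdev nvme0n1 and the controller as nvme0, which is what the host/auth.sh@64 check compares against bdev_nvme_get_controllers output before detaching.
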
16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.543 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.801 nvme0n1 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.801 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
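
get_main_ns_ip (nvmf/common.sh@741-755), traced before every attach, resolves which address the initiator should dial: it maps the transport under test to the name of the environment variable holding the address, then dereferences that name with bash indirect expansion. A sketch reconstructed from the trace frames, with the error paths assumed:

# Reconstruction of get_main_ns_ip from the xtrace frames above; the
# return-on-failure handling is an assumption, the variable names are not.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # indirect expansion: ip holds a variable *name*, ${!ip} its value
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

In this run TEST_TRANSPORT is tcp, so the candidate is NVMF_INITIATOR_IP and the echoed value is 10.0.0.1, matching the -a argument of every bdev_nvme_attach_controller call above.
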
00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.059 nvme0n1 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.059 16:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.317 
16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.317 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.318 16:06:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 nvme0n1 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:22.578 16:06:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.578 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.579 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 nvme0n1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.839 16:06:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.839 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 nvme0n1 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.099 16:06:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 16:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.099 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.100 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.100 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.100 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.670 nvme0n1 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
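The cycle traced above (rpc_cmd bdev_nvme_set_options, bdev_nvme_attach_controller, a bdev_nvme_get_controllers name check, then bdev_nvme_detach_controller) is the connect_authenticate helper from test/nvmf/host/auth.sh, exercised here once per key id. Below is a minimal sketch reconstructed from this xtrace; the exact argument wiring is inferred from the trace, and only calls visible in the log are used:

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # Expands to the --dhchap-ctrlr-key flag only when ckeys[keyid] is set,
        # i.e. when bidirectional authentication is being tested
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to exactly one digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Dial the target with the key under test (NQNs taken verbatim from the trace)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded iff the controller shows up under its bdev name
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Note the ${ckeys[keyid]:+...} idiom on the ckey line: when no controller key exists for a key id (keyid 4 in this run, where ckey is empty), the array expands to nothing and the flag is omitted from the attach call entirely rather than passed with an empty value.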
00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.670 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.930 nvme0n1 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.930 16:06:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.930 16:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.188 nvme0n1 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:24.188 16:06:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.188 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.189 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 nvme0n1 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.754 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.014 
16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.014 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.015 16:06:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.015 16:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.584 nvme0n1 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.584 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.154 nvme0n1 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.154 
16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.154 16:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.723 nvme0n1 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.723 16:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.724 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.724 16:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.292 nvme0n1 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.292 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.293 16:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.293 16:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.293 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.293 16:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.230 nvme0n1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.230 16:06:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.230 16:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.170 nvme0n1 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.170 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.430 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.431 16:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.372 nvme0n1 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.372 
16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
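The nvmf/common.sh@741-755 block repeated throughout this run is the get_main_ns_ip helper resolving which address the initiator should dial: an associative array maps each transport to the name of an environment variable, and bash indirect expansion turns NVMF_INITIATOR_IP into the 10.0.0.1 echoed at the end. A sketch under stated assumptions follows; the transport variable name ($TEST_TRANSPORT) is an assumption, since the xtrace records only its value, "tcp":

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # $TEST_TRANSPORT is an assumed name; the trace shows the [[ -z tcp ]] checks
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"    # indirect expansion; prints 10.0.0.1 in this run
    }

This keeps the test scripts transport-agnostic: the same helper serves the rdma and tcp suites, and only the environment differs.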
00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.372 16:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.307 nvme0n1 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.307 
16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.307 16:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 nvme0n1 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.245 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.503 nvme0n1 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
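The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) provision the target side of each iteration: the echoed 'hmac(shaX)' string, the dhgroup name, and the DHHC-1 secrets are pushed into the kernel nvmet host entry. A minimal sketch of that step, assuming the standard Linux nvmet configfs layout under /sys/kernel/config/nvmet/hosts/<hostnqn> and pre-filled keys/ckeys arrays -- the exact paths and the nvmet_host variable are assumptions, not taken from this log:

  # Sketch under stated assumptions; mirrors the echo sequence at host/auth.sh@48-51.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
      local nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
      echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"     # @48: e.g. 'hmac(sha384)'
      echo "${dhgroup}"      > "${nvmet_host}/dhchap_dhgroup"  # @49: e.g. ffdhe2048
      echo "${key}"          > "${nvmet_host}/dhchap_key"      # @50: DHHC-1:xx:... secret
      # @51: a controller (bidirectional) key is written only when one exists for this keyid
      [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
  }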
00:25:32.503 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.504 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 nvme0n1 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 nvme0n1 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.049 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.050 nvme0n1 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.050 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.308 16:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.308 nvme0n1 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.308 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
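Each connect_authenticate pass (host/auth.sh@104, expanded at @55-65 in the trace) then exercises the host side over the RPC socket: it restricts the initiator to the digest/dhgroup pair under test, attaches a controller with the matching DH-HMAC-CHAP key, confirms the controller materialized, and tears it down. A condensed sketch of one iteration, using only the rpc_cmd invocations visible in this trace (the surrounding function scaffolding is assumed):

  # Sketch: one iteration, e.g. digest=sha384 dhgroup=ffdhe3072 keyid=0.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # get_main_ns_ip resolved NVMF_INITIATOR_IP to 10.0.0.1 above (nvmf/common.sh@741-755)
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty for keyid=4, which has no ctrlr key
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
  # @64-65: authentication succeeded iff the named controller shows up; then detach
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0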
00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.309 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.565 nvme0n1 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.565 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.566 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.823 nvme0n1 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.823 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.081 nvme0n1 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.081 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.082 16:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.340 nvme0n1 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.340 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.601 nvme0n1 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.601 16:07:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.601 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.171 nvme0n1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.171 16:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.431 nvme0n1 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.431 16:07:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.431 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.690 nvme0n1 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:35.690 16:07:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.690 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.948 nvme0n1 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.948 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:36.208 16:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.469 nvme0n1 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.469 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.470 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.470 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.470 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.036 nvme0n1 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.036 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.037 16:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.603 nvme0n1 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.603 16:07:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.603 16:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.170 nvme0n1 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.170 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.430 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.431 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.001 nvme0n1 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
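
This iteration (ffdhe6144, keyid 4) attaches with --dhchap-key only: host/auth.sh builds the controller-key argument through the ${var:+...} expansion visible in the trace as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so the empty ckeys[4] drops the flag and only the host is authenticated, while keyids 0-3 pass both --dhchap-key and --dhchap-ctrlr-key for bidirectional authentication. A minimal sketch of that expansion with stand-in array values (the real entries are the DHHC-1 strings registered earlier in the run):

  # Sketch only: illustrative values, not the trace's actual keyring names.
  declare -a ckeys=([0]="ckey0-value" [4]="")
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-no controller key (unidirectional)}"
  done
  # keyid=0 -> --dhchap-ctrlr-key ckey0
  # keyid=4 -> no controller key (unidirectional)
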
00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.001 16:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.572 nvme0n1 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
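
The trace is starting the ffdhe8192 iterations; each one repeats the same host-side RPC cycle, condensed below as a sketch rather than a standalone script. rpc_cmd is the test suite's RPC wrapper, and key0/ckey0 name keyring entries registered earlier in the run (keyid 4 omits --dhchap-ctrlr-key as noted above); all commands and flags appear verbatim in the trace.

  digest=sha384 dhgroup=ffdhe8192 keyid=0
  # Restrict the host to the digest/dhgroup combination under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach to the listener used throughout this run, authenticating with the
  # keyid's host key and (for keyids 0-3) the controller key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Confirm the authenticated controller exists, then detach before the next pass.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
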
00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.572 16:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.510 nvme0n1 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.510 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.511 16:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.511 16:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.511 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.511 16:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 nvme0n1 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.447 16:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.385 nvme0n1 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.385 16:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.322 nvme0n1 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.322 16:07:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.322 16:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 nvme0n1 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 nvme0n1 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.700 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.701 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 nvme0n1 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 nvme0n1 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.959 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.217 16:07:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.217 16:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.217 nvme0n1 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.217 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.486 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.487 nvme0n1 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.487 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.747 nvme0n1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.747 
16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.747 16:07:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.747 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.080 nvme0n1 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.080 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
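Each block of this trace repeats one pattern per digest/dhgroup/keyid combination: install the key on the target, restrict the host to that single digest and DH group, attach with the matching DH-HMAC-CHAP keys, confirm the controller came up, and detach. A minimal sketch of one iteration as the surrounding trace shows it, assuming the rpc_cmd and nvmet_auth_set_key helpers provided by the SPDK test scripts (host/auth.sh, nvmf/common.sh):

    # Target: install key 2 and its controller key for hmac(sha512)/ffdhe3072
    nvmet_auth_set_key sha512 ffdhe3072 2
    # Host: allow only this digest/DH-group combination for DH-HMAC-CHAP
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # Attach over TCP with the host key and the bidirectional controller key
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Verify authentication succeeded (a controller named nvme0 exists), then detach
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0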
00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.081 16:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.340 nvme0n1 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.340 16:07:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
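The ckey assignment traced at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), relies on bash's ${var:+word} expansion: the --dhchap-ctrlr-key argument pair is generated only when a controller key exists for that keyid, which is why the keyid=4 attaches in this trace pass --dhchap-key key4 alone (ckey= is empty there). A standalone illustration of the same expansion, with hypothetical key values:

    # keyid 4 deliberately has no controller key, mirroring the trace above
    ckeys=([1]="DHHC-1:02:example" [4]="")
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # ${ckey[@]} expands to zero arguments when ckeys[keyid] is empty or unset,
    # so bdev_nvme_attach_controller would receive no --dhchap-ctrlr-key at all
    echo "extra args: ${ckey[*]:-<none>}"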
00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.340 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.341 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.341 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.341 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.341 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.341 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.600 nvme0n1 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.601 
16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.601 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.861 nvme0n1 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.861 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.119 nvme0n1 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.119 16:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.119 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.120 16:07:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.120 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.688 nvme0n1 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
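
Every keyid iteration in this log follows the same connect_authenticate shape (the auth.sh@55-@65 markers). Read back from the xtrace, the flow is roughly the sketch below; the keys/ckeys arrays and rpc_cmd come from the surrounding test environment, option handling is simplified, and error cleanup is omitted, so this mirrors the observable behavior rather than quoting the script:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # auth.sh@58: pass a controller key only when one exists for this keyid,
        # enabling bidirectional DH-HMAC-CHAP for keyids 0-3 but not for keyid 4
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # auth.sh@60: restrict the host to exactly one digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # auth.sh@61: connect using keyring entries key0..key4 / ckey0..ckey3
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # auth.sh@64-@65: authentication succeeded iff the controller shows up,
        # then detach so the next digest/dhgroup/keyid combination starts clean
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare nvme0n1 tokens interleaved with the RPC output are the attach call printing the bdev it created for the controller's namespace on each successful authentication.
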
00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.688 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.689 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.948 nvme0n1 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.948 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.949 16:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.207 nvme0n1 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.207 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 nvme0n1 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
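
On the target side, the nvmet_auth_set_key calls (auth.sh@42-@51) push the matching credentials into the kernel nvmet host entry before each connect. The echoed values in the trace ('hmac(sha512)', the dhgroup name, the DHHC-1 strings) line up with the Linux nvmet configfs attributes, so the helper plausibly looks like the sketch below; the configfs path and attribute names are an assumption based on the kernel interface, not quoted from the script:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # assumed host entry under the nvmet configfs tree
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # auth.sh@48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # auth.sh@49
        echo "$key"          > "$host/dhchap_key"       # auth.sh@50
        # auth.sh@51: a controller key is written only when one is defined;
        # keyid 4 has ckey='' so the [[ -z '' ]] branch above skips it
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
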
00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.772 16:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.338 nvme0n1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
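
The DHHC-1 strings being exchanged follow the NVMe base specification's DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 payload>:, where <t> is 00 for an untransformed secret (01/02/03 would mark a secret pre-transformed with SHA-256/384/512) and the payload is the secret bytes followed by a CRC-32 of those bytes. A quick length check on the keyid-0 secret from this run (illustrative shell, not part of the test):

    key='DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD:'
    b64=${key#DHHC-1:00:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c    # prints 36: a 32-byte secret + 4-byte CRC-32

The :01:, :02:, and :03: keys in this log decode the same way to 32-, 48-, and 64-byte secrets, matching the output sizes of the hashes they were transformed with.
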
00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.338 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.906 nvme0n1 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.906 16:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.475 nvme0n1 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.475 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.105 nvme0n1 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.105 16:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.672 nvme0n1 00:25:51.672 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.673 16:07:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk5MmRlODZhM2NkOWNhNzIyYWNkZDZiMjY3Y2E1YTbmC/bD: 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2VmMmE1YzNlMmQ3YjcwODk4MmE3YjQ1YjBjNzg4ZmY3YzUzY2NmMzE4NTRkZjQ5ZWMyYjZlOGEyYzkxZThhNTAGwPA=: 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.673 16:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.614 nvme0n1 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.614 16:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.615 16:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.615 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.615 16:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.548 nvme0n1 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.548 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.807 16:07:20 
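[editor's note] Each pass through this loop first provisions the key on the kernel target side. nvmet_auth_set_key (auth.sh@48-@51 above) only echoes the digest name, the dhgroup, the DHHC-1 secret, and, when one is defined, the controller secret; xtrace does not print redirections, so the destination files are hidden. A hedged reconstruction of where those echoes land, assuming the standard kernel nvmet configfs per-host auth attributes (the attribute names are an assumption, not shown verbatim in this log):

# hypothetical expansion of nvmet_auth_set_key; $key/$ckey hold the
# DHHC-1:... strings echoed above
h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$h/dhchap_hash"       # auth.sh@48
echo ffdhe8192 > "$h/dhchap_dhgroup"         # auth.sh@49
echo "$key" > "$h/dhchap_key"                # auth.sh@50
[[ -n "$ckey" ]] && echo "$ckey" > "$h/dhchap_ctrl_key"   # auth.sh@51, only when a ctrl key exists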
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.807 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.807 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDA0OWE0ZjM5NWNjYjFiM2NkYjk1NGNlMTJkZWVkYWI9i77A: 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBhMjU1NTMzM2FhNDZlYWY3NGE0NzQxOTE4MjUyOTUl49mH: 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.808 16:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.742 nvme0n1 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjNjYzFmNjU2MjBmOGI0NTAzNDY0YjM2YjkwYTAzZjZhOTdjMDVmY2M3OWM4ZTc2IxkYpg==: 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdlNTA3YzFjOThlN2ViYWEzNDdkMmUzNWU3NGFlYmOcCrPv: 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:54.742 16:07:21 
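[editor's note] Once the target side is keyed, connect_authenticate drives the SPDK host: it pins the allowed digest/dhgroup with bdev_nvme_set_options, attaches with the matching key names, verifies the controller came up, and detaches. Condensed to a hypothetical standalone run against the same endpoint (rpc_cmd is effectively scripts/rpc.py, and key2/ckey2 are key names registered earlier in the test, outside this excerpt):

rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" on success
rpc.py bdev_nvme_detach_controller nvme0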
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.742 16:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.681 nvme0n1 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODIzNWJmNjAwNjQwNDFmZWUxOTAzYWMxYmE4OTRlNTdkZDk4NWM5NTkwOWNiNTkxMGZmNmM4NTcxZmVlODViZWOoQ6k=: 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.681 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.682 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.682 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.682 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.682 16:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.941 16:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.941 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:55.941 16:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 nvme0n1 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmMyMjI2Mjk4OWJiMmIzZWM3MjU2YTI2MzhhZTE0ZGE2NzA3NDA0NzY5YWUyMjViWUhvIw==: 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWEyYWY1Y2FmNmU1OTlhMjEwY2M1ZTUzZjRiNjkwOTg3MjQwZDZiYWY2YTk1NmFloHGNdw==: 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.881 
16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 request: 00:25:56.881 { 00:25:56.881 "name": "nvme0", 00:25:56.881 "trtype": "tcp", 00:25:56.881 "traddr": "10.0.0.1", 00:25:56.881 "adrfam": "ipv4", 00:25:56.881 "trsvcid": "4420", 00:25:56.881 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:56.881 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:56.881 "prchk_reftag": false, 00:25:56.881 "prchk_guard": false, 00:25:56.881 "hdgst": false, 00:25:56.881 "ddgst": false, 00:25:56.881 "method": "bdev_nvme_attach_controller", 00:25:56.881 "req_id": 1 00:25:56.881 } 00:25:56.881 Got JSON-RPC error response 00:25:56.881 response: 00:25:56.881 { 00:25:56.881 "code": -5, 00:25:56.881 "message": "Input/output error" 00:25:56.881 } 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:56.881 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.882 request: 00:25:56.882 { 00:25:56.882 "name": "nvme0", 00:25:56.882 "trtype": "tcp", 00:25:56.882 "traddr": "10.0.0.1", 00:25:56.882 "adrfam": "ipv4", 00:25:56.882 "trsvcid": "4420", 00:25:56.882 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:56.882 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:56.882 "prchk_reftag": false, 00:25:56.882 "prchk_guard": false, 00:25:56.882 "hdgst": false, 00:25:56.882 "ddgst": false, 00:25:56.882 "dhchap_key": "key2", 00:25:56.882 "method": "bdev_nvme_attach_controller", 00:25:56.882 "req_id": 1 00:25:56.882 } 00:25:56.882 Got JSON-RPC error response 00:25:56.882 response: 00:25:56.882 { 00:25:56.882 "code": -5, 00:25:56.882 "message": "Input/output error" 00:25:56.882 } 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:56.882 16:07:23 
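[editor's note] The target was re-keyed above with the sha256/ffdhe2048 key1, so the NOT probes in this stretch assert that attaching with no key, with the wrong key (key2), or with a mismatched controller key (key1 + ckey2) each fails; all three come back as JSON-RPC error -5, Input/output error. A minimal sketch of the inverted check, assuming the same rpc.py client (the real NOT/valid_exec_arg machinery in autotest_common.sh also separates exec errors from plain command failures):

if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
    echo "NOT: attach with a mismatched key unexpectedly succeeded" >&2
    exit 1
fi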
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.882 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.142 request: 00:25:57.142 { 00:25:57.142 "name": "nvme0", 00:25:57.142 "trtype": "tcp", 00:25:57.142 "traddr": "10.0.0.1", 00:25:57.142 "adrfam": "ipv4", 
00:25:57.142 "trsvcid": "4420", 00:25:57.142 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:57.142 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:57.142 "prchk_reftag": false, 00:25:57.142 "prchk_guard": false, 00:25:57.142 "hdgst": false, 00:25:57.142 "ddgst": false, 00:25:57.142 "dhchap_key": "key1", 00:25:57.142 "dhchap_ctrlr_key": "ckey2", 00:25:57.142 "method": "bdev_nvme_attach_controller", 00:25:57.142 "req_id": 1 00:25:57.142 } 00:25:57.142 Got JSON-RPC error response 00:25:57.142 response: 00:25:57.142 { 00:25:57.142 "code": -5, 00:25:57.142 "message": "Input/output error" 00:25:57.142 } 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.142 rmmod nvme_tcp 00:25:57.142 rmmod nvme_fabrics 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1243314 ']' 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1243314 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1243314 ']' 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1243314 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243314 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243314' 00:25:57.142 killing process with pid 1243314 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1243314 00:25:57.142 16:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1243314 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.403 16:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.309 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:59.567 16:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:59.568 16:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:00.504 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:00.504 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:00.786 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:01.730 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:01.730 16:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.M8c /tmp/spdk.key-null.U0Y /tmp/spdk.key-sha256.ZUC /tmp/spdk.key-sha384.SBe /tmp/spdk.key-sha512.d7t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:01.730 16:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:02.667 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:02.667 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:02.667 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:02.667 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:02.668 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:02.668 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:02.668 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:02.668 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:02.668 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:02.668 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:02.668 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:02.668 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:02.668 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:02.668 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:02.668 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:02.668 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:02.668 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:02.926 00:26:02.926 real 0m50.031s 00:26:02.926 user 0m48.171s 00:26:02.926 sys 0m5.735s 00:26:02.926 16:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.926 16:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.926 ************************************ 00:26:02.926 END TEST nvmf_auth_host 00:26:02.926 ************************************ 00:26:02.926 16:07:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:02.926 16:07:29 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:02.926 16:07:29 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:02.926 16:07:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:02.926 16:07:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.926 16:07:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.926 ************************************ 00:26:02.926 START TEST nvmf_digest 00:26:02.926 ************************************ 00:26:02.926 16:07:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:02.926 * Looking for test storage... 
00:26:02.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.926 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.926 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:02.926 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.926 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.927 16:07:29 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:03.186 16:07:29 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.186 16:07:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:05.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:05.088 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:05.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:05.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:05.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:05.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:26:05.089 00:26:05.089 --- 10.0.0.2 ping statistics --- 00:26:05.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.089 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:05.089 00:26:05.089 --- 10.0.0.1 ping statistics --- 00:26:05.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.089 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 ************************************ 00:26:05.089 START TEST nvmf_digest_clean 00:26:05.089 ************************************ 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1252821 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1252821 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1252821 ']' 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.089 
16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.089 16:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 [2024-07-15 16:07:32.000249] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:05.089 [2024-07-15 16:07:32.000344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.347 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.347 [2024-07-15 16:07:32.070083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.347 [2024-07-15 16:07:32.189699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.347 [2024-07-15 16:07:32.189763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.347 [2024-07-15 16:07:32.189779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.347 [2024-07-15 16:07:32.189793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.347 [2024-07-15 16:07:32.189804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
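The nvmf_tcp_init block above builds a two-endpoint topology on a single host: one port of the E810 NIC is moved into its own network namespace, so NVMe/TCP traffic between initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) really crosses the link instead of looping back. A standalone sketch of that setup, using only commands that appear in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator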
00:26:05.347 [2024-07-15 16:07:32.189845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.347 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 null0 00:26:05.605 [2024-07-15 16:07:32.366046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.605 [2024-07-15 16:07:32.390265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1252852 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1252852 /var/tmp/bperf.sock 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1252852 ']' 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:05.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.605 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 [2024-07-15 16:07:32.438835] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:05.605 [2024-07-15 16:07:32.438927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252852 ] 00:26:05.605 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.605 [2024-07-15 16:07:32.501037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.863 [2024-07-15 16:07:32.617414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.863 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.864 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:05.864 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:05.864 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:05.864 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.122 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.122 16:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.691 nvme0n1 00:26:06.691 16:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.691 16:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.691 Running I/O for 2 seconds... 
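Every bperf pass in this suite follows the same lifecycle: bdevperf starts idle (-z defers I/O until the perform_tests RPC, --wait-for-rpc defers subsystem init), the framework is started over the app's private socket, a controller is attached with --ddgst so the host computes and verifies the NVMe/TCP data digest (a CRC32C over each data PDU), and perform_tests then drives the 2-second job. A sketch assembled from the paths in this log, with SPDK_DIR standing in for the workspace checkout:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the harness polls the socket with waitforlisten before issuing RPCs)
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests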
00:26:09.221 00:26:09.221 Latency(us) 00:26:09.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.221 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:09.221 nvme0n1 : 2.00 18909.78 73.87 0.00 0.00 6759.71 3835.07 16311.18 00:26:09.221 =================================================================================================================== 00:26:09.221 Total : 18909.78 73.87 0.00 0.00 6759.71 3835.07 16311.18 00:26:09.221 0 00:26:09.221 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.221 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.221 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.221 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.222 | select(.opcode=="crc32c") 00:26:09.222 | "\(.module_name) \(.executed)"' 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1252852 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1252852 ']' 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1252852 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252852 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252852' 00:26:09.222 killing process with pid 1252852 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1252852 00:26:09.222 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.222 00:26:09.222 Latency(us) 00:26:09.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.222 =================================================================================================================== 00:26:09.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.222 16:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1252852 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:09.222 16:07:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253256 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253256 /var/tmp/bperf.sock 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1253256 ']' 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.222 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.481 [2024-07-15 16:07:36.157614] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:09.481 [2024-07-15 16:07:36.157707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253256 ] 00:26:09.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.481 Zero copy mechanism will not be used. 
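The MiB/s column in these result tables follows directly from IOPS and the configured I/O size: MiB/s = IOPS x io_size / 2^20. Checking the first pass above (18909.78 IOPS at 4096 bytes):

    awk 'BEGIN { printf "%.2f\n", 18909.78 * 4096 / 1048576 }'   # 73.87, matching the table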
00:26:09.481 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.481 [2024-07-15 16:07:36.220008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.481 [2024-07-15 16:07:36.341905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.481 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.481 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:09.481 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.481 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.481 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.050 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.050 16:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.309 nvme0n1 00:26:10.309 16:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.309 16:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.309 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.309 Zero copy mechanism will not be used. 00:26:10.309 Running I/O for 2 seconds... 
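After each job, pass/fail is decided not by IOPS but by the accel framework's statistics: host/digest.sh reads accel_get_stats from the bperf app and requires that CRC32C operations actually executed, and on the expected module (software here, since DSA scanning is off). A condensed sketch of that check, reusing the jq filter visible in this log (rpc.py is the SPDK scripts/rpc.py from the paths above):

    read -r acc_module acc_executed < <(
        rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))             # digest work really happened
    [[ $acc_module == software ]]      # scan_dsa=false, so the software module is expected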
00:26:12.843 00:26:12.843 Latency(us) 00:26:12.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.843 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:12.843 nvme0n1 : 2.00 3151.05 393.88 0.00 0.00 5073.20 4708.88 12379.02 00:26:12.843 =================================================================================================================== 00:26:12.843 Total : 3151.05 393.88 0.00 0.00 5073.20 4708.88 12379.02 00:26:12.843 0 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.843 | select(.opcode=="crc32c") 00:26:12.843 | "\(.module_name) \(.executed)"' 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253256 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1253256 ']' 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1253256 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253256 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253256' 00:26:12.843 killing process with pid 1253256 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1253256 00:26:12.843 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.843 00:26:12.843 Latency(us) 00:26:12.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.843 =================================================================================================================== 00:26:12.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.843 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1253256 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:13.100 16:07:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253696 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253696 /var/tmp/bperf.sock 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1253696 ']' 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.100 16:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.100 [2024-07-15 16:07:39.855018] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:26:13.100 [2024-07-15 16:07:39.855097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253696 ] 00:26:13.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.100 [2024-07-15 16:07:39.918611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.356 [2024-07-15 16:07:40.041305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.356 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.356 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:13.356 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:13.356 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:13.356 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:13.613 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.613 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.182 nvme0n1 00:26:14.182 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.182 16:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.182 Running I/O for 2 seconds... 
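Between passes the previous bperf instance is torn down by the killprocess helper whose xtrace appears after every results table: a kill -0 liveness probe, a ps lookup of the reactor's comm name, then kill and wait. A reduced sketch of that pattern (the real helper in autotest_common.sh also special-cases sudo-wrapped processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                # already gone
        ps --no-headers -o comm= "$pid"           # e.g. reactor_1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child and collect its status
    }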
00:26:16.719 00:26:16.719 Latency(us) 00:26:16.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.719 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:16.719 nvme0n1 : 2.01 20328.14 79.41 0.00 0.00 6286.76 3495.25 14175.19 00:26:16.719 =================================================================================================================== 00:26:16.719 Total : 20328.14 79.41 0.00 0.00 6286.76 3495.25 14175.19 00:26:16.719 0 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.719 | select(.opcode=="crc32c") 00:26:16.719 | "\(.module_name) \(.executed)"' 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253696 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1253696 ']' 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1253696 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253696 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253696' 00:26:16.719 killing process with pid 1253696 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1253696 00:26:16.719 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.719 00:26:16.719 Latency(us) 00:26:16.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.719 =================================================================================================================== 00:26:16.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.719 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1253696 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:16.981 16:07:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1254192 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1254192 /var/tmp/bperf.sock 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1254192 ']' 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.981 [2024-07-15 16:07:43.700238] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:16.981 [2024-07-15 16:07:43.700319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254192 ] 00:26:16.981 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.981 Zero copy mechanism will not be used. 
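This is the last of four clean-digest configurations: run_digest sweeps reads and writes at a small and a large block size, always with DSA off. The recurring "zero copy threshold (65536)" notice is informational only; 131072-byte I/O exceeds the 64 KiB zero-copy cutoff, so buffers are copied instead, which has no effect on digest generation or checking. The sweep is equivalent to:

    for spec in 'randread 4096 128' 'randread 131072 16' \
                'randwrite 4096 128' 'randwrite 131072 16'; do
        read -r rw bs qd <<< "$spec"
        run_bperf "$rw" "$bs" "$qd" false         # false => scan_dsa off
    done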
00:26:16.981 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.981 [2024-07-15 16:07:43.761327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.981 [2024-07-15 16:07:43.876622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:16.981 16:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.556 16:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.556 16:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.814 nvme0n1 00:26:17.814 16:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:17.814 16:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.073 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.073 Zero copy mechanism will not be used. 00:26:18.073 Running I/O for 2 seconds... 
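The "(( i == 0 ))" / "return 0" pair printed before each framework_start_init is the tail of waitforlisten, which blocks until the freshly launched app answers on its RPC socket. A reduced sketch, assuming rpc_get_methods as the readiness probe (the actual helper in autotest_common.sh carries more error handling):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        until rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid" || return 1            # app died before it started listening
            (( ++i >= max_retries )) && return 1
            sleep 0.5
        done
    }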
00:26:19.980 00:26:19.980 Latency(us) 00:26:19.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.980 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:19.980 nvme0n1 : 2.01 2269.43 283.68 0.00 0.00 7033.43 5437.06 13495.56 00:26:19.980 =================================================================================================================== 00:26:19.980 Total : 2269.43 283.68 0.00 0.00 7033.43 5437.06 13495.56 00:26:19.980 0 00:26:19.980 16:07:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:19.980 16:07:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:19.980 16:07:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:19.980 16:07:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:19.980 | select(.opcode=="crc32c") 00:26:19.980 | "\(.module_name) \(.executed)"' 00:26:19.980 16:07:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1254192 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1254192 ']' 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1254192 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254192 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:20.239 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:20.240 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254192' 00:26:20.240 killing process with pid 1254192 00:26:20.240 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1254192 00:26:20.240 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.240 00:26:20.240 Latency(us) 00:26:20.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.240 =================================================================================================================== 00:26:20.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.240 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1254192 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1252821 00:26:20.807 16:07:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1252821 ']' 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1252821 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252821 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252821' 00:26:20.807 killing process with pid 1252821 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1252821 00:26:20.807 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1252821 00:26:21.065 00:26:21.065 real 0m15.806s 00:26:21.065 user 0m31.811s 00:26:21.065 sys 0m3.888s 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.065 ************************************ 00:26:21.065 END TEST nvmf_digest_clean 00:26:21.065 ************************************ 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.065 ************************************ 00:26:21.065 START TEST nvmf_digest_error 00:26:21.065 ************************************ 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1254705 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1254705 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1254705 ']' 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.065 16:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.065 [2024-07-15 16:07:47.852661] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:21.065 [2024-07-15 16:07:47.852756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.065 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.065 [2024-07-15 16:07:47.916942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.323 [2024-07-15 16:07:48.023519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.323 [2024-07-15 16:07:48.023571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.323 [2024-07-15 16:07:48.023592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.323 [2024-07-15 16:07:48.023602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.323 [2024-07-15 16:07:48.023611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
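Where nvmf_digest_clean only verified that digests are produced, nvmf_digest_error wires the target's CRC32C path to a fault-injecting accel module so that digest mismatches happen on purpose. The moving parts, per the rpc_cmd calls that appear just below (rpc_cmd addresses the nvmf_tgt's /var/tmp/spdk.sock inside the namespace):

    rpc_cmd accel_assign_opc -o crc32c -m error                    # route CRC32C through the "error" module
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # keep the path clean while bperf connects
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt 256 CRC32C operations

The corrupted digests surface on the host as the "data digest error on tqpair" messages and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions that fill the rest of this run; bdevperf is configured with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 so those failures are retried and counted rather than treated as fatal.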
00:26:21.323 [2024-07-15 16:07:48.023637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.323 [2024-07-15 16:07:48.084164] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.323 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.323 null0 00:26:21.323 [2024-07-15 16:07:48.201795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.324 [2024-07-15 16:07:48.226025] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1254775 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1254775 /var/tmp/bperf.sock 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1254775 ']' 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 
-t 2 -q 128 -z 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.324 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.581 [2024-07-15 16:07:48.275802] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:21.581 [2024-07-15 16:07:48.275914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254775 ] 00:26:21.581 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.581 [2024-07-15 16:07:48.333205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.581 [2024-07-15 16:07:48.441484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.839 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.839 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:21.839 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.839 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.097 16:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.355 nvme0n1 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:22.355 16:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:22.615 Running I/O for 2 seconds...
00:26:22.615 [2024-07-15 16:07:49.383824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.383910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.383937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.400297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.400335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.400355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.416466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.416503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.416523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.428903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.428950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.428980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.444685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.444720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.456496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.456531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.456550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.472660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.472694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.472714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.484938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.484970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.484987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.497374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.497410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.497429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.510471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.510506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.510526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.523716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.523754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.523772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.615 [2024-07-15 16:07:49.538126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.615 [2024-07-15 16:07:49.538157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.615 [2024-07-15 16:07:49.538174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.552851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.552905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.552952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.564870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.564928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.564945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.578746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.578781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.578800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.593941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.593970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.593986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.605143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.605191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.605210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.618778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.618813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.618832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.633932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.633963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.633980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.646821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.646856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.646884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.875 [2024-07-15 16:07:49.660369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.875 [2024-07-15 16:07:49.660404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.875 [2024-07-15 16:07:49.660423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.672298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.672333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.672353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.688330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.688366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.688385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.702065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.702112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.702130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.715592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.715627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.715646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.727500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.727536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.727556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.742821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.742857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.742886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.757426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.757473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.757491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.770156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.770203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.770221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.782313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.782342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.782363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.876 [2024-07-15 16:07:49.795781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:22.876 [2024-07-15 16:07:49.795827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.876 [2024-07-15 16:07:49.795844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.807348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.807380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.807397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.822316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.822346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.822362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.835874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.835928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.835946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.848363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.848408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.848424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.860689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.860721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.860738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.871955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.871986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.872018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.885465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.885493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.885509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.896766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.896802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.896820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.911265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.911295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.911311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.925466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.925498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.925516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.937076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.937106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.937122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.950180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.950209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.950226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.963249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.963278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.963293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.976291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.976320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.976336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:49.992478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:49.992509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:49.992524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:50.007245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:50.007293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:50.007318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:50.020968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:50.021003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:50.021022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:50.034095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:50.034130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:50.034148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:50.045434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:50.045469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:50.045486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.134 [2024-07-15 16:07:50.058306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.134 [2024-07-15 16:07:50.058339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.134 [2024-07-15 16:07:50.058356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.394 [2024-07-15 16:07:50.072053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.072085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.072102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.085561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.085592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.085609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.097614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.097646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.097664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.112250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.112282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.112299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.123199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.123240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.123259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.135714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.135759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.135776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.149639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.149670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.149687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.162863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.162905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.162923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.174891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.174929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.174946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.188896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.188934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.188950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.200194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.200225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.200242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.214275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.214303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.214318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.225183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.225227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.225243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.238787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.238831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.238847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.254687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.254716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.254732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.265080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.265109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.265125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.279786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.279834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.293810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.293839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.293869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.305564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.305592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.305607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.395 [2024-07-15 16:07:50.317949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.395 [2024-07-15 16:07:50.317979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.395 [2024-07-15 16:07:50.317996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.330937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.330968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.330986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.344043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.344073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.344098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.356982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.357010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.357028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.370341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.370372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.370389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.383175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.383205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.383222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.394098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.394130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.394147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.407479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.407511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.407529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.656 [2024-07-15 16:07:50.420888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.656 [2024-07-15 16:07:50.420919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.656 [2024-07-15 16:07:50.420936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.432076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.432107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.432124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.445381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.445410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.445442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.457851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.457903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.457922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.470757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.470789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.470806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.484886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.484916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.484932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.497046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.497076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.497093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.511719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.511747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.511762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.522414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.522442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.522458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.537319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.537351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.537367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.550184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.550215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.550231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.561907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.561945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.561961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.657 [2024-07-15 16:07:50.577223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.657 [2024-07-15 16:07:50.577252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.657 [2024-07-15 16:07:50.577282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.588070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.588100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.588116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.601161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.601192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.601223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.615791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.615819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.615835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.628645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.628675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.628691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.641733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.641776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.641794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.655119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.655149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.655166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.667910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.667942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.667959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.679470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.679498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.679521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.693139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.693170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.693187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.705009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.705039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.705056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.717531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.717559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.717574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.730732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.730763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.730780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.742544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.742574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.742591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.756750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.756781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.756798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.768454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.768484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.768500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.782547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.782577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.782594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.796051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.918 [2024-07-15 16:07:50.796099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.918 [2024-07-15 16:07:50.809506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.918 [2024-07-15 16:07:50.809540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.919 [2024-07-15 16:07:50.809558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.919 [2024-07-15 16:07:50.822650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.919 [2024-07-15 16:07:50.822684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.919 [2024-07-15 16:07:50.822703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.919 [2024-07-15 16:07:50.835161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:23.919 [2024-07-15 16:07:50.835212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.919 [2024-07-15 16:07:50.835230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.849334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.849369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.179 [2024-07-15 16:07:50.849389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.862861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.862902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.179 [2024-07-15 16:07:50.862935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.879105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.879135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.179 [2024-07-15 16:07:50.879153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.890991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.891019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.179 [2024-07-15 16:07:50.891050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.905825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.905859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.179 [2024-07-15 16:07:50.905893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.179 [2024-07-15 16:07:50.919503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.179 [2024-07-15 16:07:50.919538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.919558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:50.934478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:50.934512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.934531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:50.945098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:50.945128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.945161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:50.960085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:50.960115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.960148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:50.974129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:50.974160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.974176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:50.988829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:50.988862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:50.988890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.001001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.001029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.001045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.015887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.015934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.015951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.030366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.030406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.030426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.042738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.042772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.042791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.055788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.055822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.055841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.071551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.071587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.084355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.084389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.084408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.180 [2024-07-15 16:07:51.096492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.180 [2024-07-15 16:07:51.096527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.180 [2024-07-15 16:07:51.096545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.111959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.111990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.112007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.127054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.127084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.127101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.140462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.140497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.140516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.152896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.152949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.152966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.168684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.168719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.168737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.180139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.180167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.180183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.195087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.195121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.195138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.208583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.208617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.208636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.222759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.222793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.222812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.235415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.235449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.235467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.248901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.248949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.248965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.262043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.262086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.262107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.276840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.276874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.276919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.289999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.290030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.440 [2024-07-15 16:07:51.290048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.440 [2024-07-15 16:07:51.303113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50)
00:26:24.440 [2024-07-15 16:07:51.303142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.440 [2024-07-15 16:07:51.303174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.440 [2024-07-15 16:07:51.316646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50) 00:26:24.440 [2024-07-15 16:07:51.316680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.440 [2024-07-15 16:07:51.316698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.440 [2024-07-15 16:07:51.330137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50) 00:26:24.440 [2024-07-15 16:07:51.330165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.440 [2024-07-15 16:07:51.330180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.440 [2024-07-15 16:07:51.344224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50) 00:26:24.440 [2024-07-15 16:07:51.344257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.440 [2024-07-15 16:07:51.344276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.440 [2024-07-15 16:07:51.357497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aebd50) 00:26:24.440 [2024-07-15 16:07:51.357530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.440 [2024-07-15 16:07:51.357548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.440 00:26:24.440 Latency(us) 00:26:24.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:24.441 nvme0n1 : 2.00 19058.28 74.45 0.00 0.00 6707.75 3373.89 18932.62 00:26:24.441 =================================================================================================================== 00:26:24.441 Total : 19058.28 74.45 0.00 0.00 6707.75 3373.89 18932.62 00:26:24.441 0 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:24.700 | .driver_specific 00:26:24.700 | .nvme_error 00:26:24.700 | .status_code 00:26:24.700 | .command_transient_transport_error' 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1254775 00:26:24.700 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1254775 ']' 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1254775 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254775 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254775' 00:26:24.960 killing process with pid 1254775 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1254775 00:26:24.960 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.960 00:26:24.960 Latency(us) 00:26:24.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.960 =================================================================================================================== 00:26:24.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.960 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1254775 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255185 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255185 /var/tmp/bperf.sock 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1255185 ']' 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
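[Editor's note] The get_transient_errcount trace a few lines up is the whole pass/fail decision for this test case: read the bdev's NVMe error counters over the bperf RPC socket and assert that the COMMAND TRANSIENT TRANSPORT ERROR count is non-zero. A minimal sketch of that check follows. It assumes an SPDK checkout at $SPDK_ROOT and a bdevperf instance already listening on /var/tmp/bperf.sock; the helper name count_transient_errors is illustrative, not the test's own definition.

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (illustrative names,
# not the autotest's own helpers).
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock

count_transient_errors() {
    local bdev=$1
    # bdev_nvme_set_options --nvme-error-stat (the same call the next
    # test's trace below repeats) is what makes these per-status-code
    # counters appear under driver_specific.nvme_error in the iostat JSON.
    "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(count_transient_errors nvme0n1)
# This run counted 149 such completions; any non-zero count means the
# injected crc32c corruption surfaced end to end as data digest errors.
(( errcount > 0 )) || exit 1

With the check done, the log continues: the old bdevperf instance is killed and a new one is started for the 131072-byte, queue-depth-16 variant of the same error-injection test.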
00:26:25.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:25.218 16:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.218 [2024-07-15 16:07:51.981408] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:26:25.218 [2024-07-15 16:07:51.981497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255185 ]
00:26:25.218 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:25.218 Zero copy mechanism will not be used.
00:26:25.218 EAL: No free 2048 kB hugepages reported on node 1
00:26:25.218 [2024-07-15 16:07:52.039004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:25.218 [2024-07-15 16:07:52.146758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:25.477 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:25.477 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:25.477 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.477 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:25.735 16:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:26.304 nvme0n1
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:26.304 16:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:26.304 I/O
size of 131072 is greater than zero copy threshold (65536). 00:26:26.304 Zero copy mechanism will not be used. 00:26:26.304 Running I/O for 2 seconds... 00:26:26.304 [2024-07-15 16:07:53.155248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.304 [2024-07-15 16:07:53.155309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.304 [2024-07-15 16:07:53.155332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.304 [2024-07-15 16:07:53.165243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.304 [2024-07-15 16:07:53.165275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.304 [2024-07-15 16:07:53.165293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.304 [2024-07-15 16:07:53.175129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.304 [2024-07-15 16:07:53.175160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.304 [2024-07-15 16:07:53.175177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.304 [2024-07-15 16:07:53.184716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.304 [2024-07-15 16:07:53.184750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.304 [2024-07-15 16:07:53.184768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.304 [2024-07-15 16:07:53.194252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.304 [2024-07-15 16:07:53.194282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.304 [2024-07-15 16:07:53.194298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.304 [2024-07-15 16:07:53.203971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.305 [2024-07-15 16:07:53.204001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.305 [2024-07-15 16:07:53.204018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.305 [2024-07-15 16:07:53.213646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.305 [2024-07-15 16:07:53.213675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:26.305 [2024-07-15 16:07:53.213691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.305 [2024-07-15 16:07:53.223388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.305 [2024-07-15 16:07:53.223423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.305 [2024-07-15 16:07:53.223442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.305 [2024-07-15 16:07:53.233576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.305 [2024-07-15 16:07:53.233609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.305 [2024-07-15 16:07:53.233627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.243502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.243532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.243549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.253192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.253222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.262809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.262841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.262860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.272458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.272487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.272503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.282186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.282216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.282232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.292129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.292159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.292175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.302077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.302107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.302123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.312103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.312132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.312148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.322126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.322192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.332038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.332067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.332083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.341952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.341982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.341998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.351899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.351932] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.351964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.362027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.362057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.362074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.372031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.372061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.372078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.382156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.382204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.382223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.392241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.392276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.392294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.402190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.402224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.402242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.412163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.412196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.412215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.422266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.422302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.422328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.432310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.432345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.432363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.442473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.442526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.452560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.452593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.452611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.462427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.462461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.462480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.472381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.472414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.472433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.564 [2024-07-15 16:07:53.482375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.564 [2024-07-15 16:07:53.482409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.564 [2024-07-15 16:07:53.482428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.565 [2024-07-15 16:07:53.492300] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.565 [2024-07-15 16:07:53.492346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.565 [2024-07-15 16:07:53.492365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.825 [2024-07-15 16:07:53.502334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.825 [2024-07-15 16:07:53.502365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.825 [2024-07-15 16:07:53.502381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.825 [2024-07-15 16:07:53.511899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.825 [2024-07-15 16:07:53.511950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.825 [2024-07-15 16:07:53.511970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.825 [2024-07-15 16:07:53.521713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.825 [2024-07-15 16:07:53.521743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.825 [2024-07-15 16:07:53.521759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.825 [2024-07-15 16:07:53.531634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.531663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.531679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.541946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.541978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.541996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.552468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.552502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.552520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.562811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.562845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.562863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.573173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.573207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.573225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.583691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.583726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.583746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.594417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.594450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.594470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.604885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.604918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.604951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.615149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.615179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.615210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.625706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.625739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.625758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.636069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.636099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.636116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.646139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.646169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.646185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.656159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.656206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.656225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.666138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.666183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.666198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.676650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.676686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.676705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.687135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.687183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.687209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.697302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.697351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.697370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.707434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.707478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.707497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.717686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.717720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.727818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.727852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.727872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.737999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.738028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.738044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.826 [2024-07-15 16:07:53.747965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:26.826 [2024-07-15 16:07:53.747995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.826 [2024-07-15 16:07:53.748011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.085 [2024-07-15 16:07:53.758289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.085 [2024-07-15 16:07:53.758323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.085 [2024-07-15 16:07:53.758342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.085 [2024-07-15 16:07:53.768568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.085 [2024-07-15 16:07:53.768603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.085 [2024-07-15 16:07:53.768622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.085 [2024-07-15 16:07:53.778958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.085 [2024-07-15 16:07:53.779004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.085 [2024-07-15 16:07:53.779021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.789347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.789380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-07-15 16:07:53.789399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.799309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.799341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-07-15 16:07:53.799375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.809564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.809592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-07-15 16:07:53.809608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.819794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.819827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-07-15 16:07:53.819859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.829767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.829801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.086 [2024-07-15 16:07:53.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.086 [2024-07-15 16:07:53.839643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0) 00:26:27.086 [2024-07-15 16:07:53.839674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.086 [2024-07-15 16:07:53.839692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.086 [2024-07-15 16:07:53.849408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:27.086 [2024-07-15 16:07:53.849441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.086 [2024-07-15 16:07:53.849459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.086 [2024-07-15 16:07:53.859387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:27.086 [2024-07-15 16:07:53.859415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.086 [2024-07-15 16:07:53.859454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x22774f0), READ command print with varying lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061) repeats at roughly 10 ms intervals from 16:07:53.869 through 16:07:55.125 ...]
00:26:28.388 [2024-07-15 16:07:55.135134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:28.388 [2024-07-15 16:07:55.135177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.388 [2024-07-15 16:07:55.135204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.388 [2024-07-15 16:07:55.144837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:28.388 [2024-07-15 16:07:55.144870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.388 [2024-07-15 16:07:55.144899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
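The block above is the digest-error phase behaving as intended: the initiator's nvme_tcp layer detects a CRC-32C data digest (DDGST) mismatch on each READ and surfaces it as a TRANSIENT TRANSPORT ERROR (00/22) completion, which lands in the bdev's nvme_error statistics instead of failing the run. To cross-check the tally against the raw log, a minimal sketch, assuming the console output was saved to build.log (a hypothetical path):

    # each injected digest error produces one *ERROR* line and one (00/22) completion
    grep -c 'data digest error on tqpair' build.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log

The two counts should move in lockstep, and they should track the command_transient_transport_error counter the harness reads back right after the run.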
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:28.388 [2024-07-15 16:07:55.135177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.388 [2024-07-15 16:07:55.135204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.388 [2024-07-15 16:07:55.144837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22774f0)
00:26:28.388 [2024-07-15 16:07:55.144870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.388 [2024-07-15 16:07:55.144899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.388
00:26:28.388 Latency(us)
00:26:28.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.388 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:28.388 nvme0n1 : 2.00 3108.46 388.56 0.00 0.00 5140.97 4587.52 14369.37
00:26:28.388 ===================================================================================================================
00:26:28.388 Total : 3108.46 388.56 0.00 0.00 5140.97 4587.52 14369.37
00:26:28.388 0
00:26:28.388 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:28.388 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:28.388 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:28.388 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:28.388 | .driver_specific
00:26:28.388 | .nvme_error
00:26:28.388 | .status_code
00:26:28.388 | .command_transient_transport_error'
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255185
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1255185 ']'
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1255185
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255185
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255185'
killing process with pid 1255185
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1255185
00:26:28.648 Received shutdown
signal, test time was about 2.000000 seconds
00:26:28.648
00:26:28.648 Latency(us)
00:26:28.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.648 ===================================================================================================================
00:26:28.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:28.648 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1255185
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255600
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255600 /var/tmp/bperf.sock
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1255600 ']'
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:28.908 16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:28.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
16:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:28.908 [2024-07-15 16:07:55.771381] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
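The get_transient_errcount check traced above (just before bdevperf was killed and relaunched for the randwrite pass) is the test's pass criterion: bdev_get_iostat is sent over bdevperf's RPC socket and jq extracts the accumulated command_transient_transport_error counter, which the assertion (( 201 > 0 )) confirms is non-zero, i.e. the injected digest corruptions really surfaced as TRANSIENT TRANSPORT ERROR completions. A condensed sketch of that query, using only the paths and RPC names visible in the trace:

# Read nvme error statistics from the running bdevperf and pull out the counter asserted on above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # 201 in the randread pass above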
00:26:28.908 [2024-07-15 16:07:55.771494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255600 ]
00:26:28.908 EAL: No free 2048 kB hugepages reported on node 1
00:26:28.908 [2024-07-15 16:07:55.834585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.167 [2024-07-15 16:07:55.945443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:29.167 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:29.167 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:29.167 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.167 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:29.424 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:29.992 nvme0n1
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:29.992 16:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:29.992 Running I/O for 2 seconds...
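Before the randwrite pass issues any I/O, the trace above arms the fault the same way the randread pass did: error statistics and unlimited retries are enabled on the nvme bdev module, CRC32C error injection is cleared, the controller is attached over TCP with data digest (--ddgst) enabled, and only then are 256 crc32c operations marked for corruption before perform_tests starts the workload. Condensed into one sketch (every command appears verbatim in the xtrace above; only the ordering comments are added):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors; retry forever
$RPC accel_error_inject_error -o crc32c -t disable                   # start clean: no injection during attach
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest enabled on the connection
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt the next 256 crc32c operations
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests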
00:26:29.992 [2024-07-15 16:07:56.821995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190edd58 00:26:29.992 [2024-07-15 16:07:56.823185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.823255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.834228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fa3a0 00:26:29.992 [2024-07-15 16:07:56.835335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.835369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.847741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e3d08 00:26:29.992 [2024-07-15 16:07:56.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.849122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.861400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e49b0 00:26:29.992 [2024-07-15 16:07:56.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.862883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.875116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e7c50 00:26:29.992 [2024-07-15 16:07:56.876753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.876787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.887116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fa3a0 00:26:29.992 [2024-07-15 16:07:56.888369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.888403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.899807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f92c0 00:26:29.992 [2024-07-15 16:07:56.900940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.900970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:26:29.992 [2024-07-15 16:07:56.912579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fe720 00:26:29.992 [2024-07-15 16:07:56.913690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.992 [2024-07-15 16:07:56.913723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.925470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fd640 00:26:30.252 [2024-07-15 16:07:56.926609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.926652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.938600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ebb98 00:26:30.252 [2024-07-15 16:07:56.939511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.939544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.953305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f4b08 00:26:30.252 [2024-07-15 16:07:56.955249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.955283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.966745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f3a28 00:26:30.252 [2024-07-15 16:07:56.968862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.968903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.975749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e6b70 00:26:30.252 [2024-07-15 16:07:56.976687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.976720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:56.987806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ed0b0 00:26:30.252 [2024-07-15 16:07:56.988737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:56.988770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.001312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190eaab8 00:26:30.252 [2024-07-15 16:07:57.002438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.002472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.015633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e7818 00:26:30.252 [2024-07-15 16:07:57.016944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.016973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.028740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f35f0 00:26:30.252 [2024-07-15 16:07:57.030261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.030294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.041531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f1ca0 00:26:30.252 [2024-07-15 16:07:57.043092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.043122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.053434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e4de8 00:26:30.252 [2024-07-15 16:07:57.054855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.054899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.065351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e01f8 00:26:30.252 [2024-07-15 16:07:57.066395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.066429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.078363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f9f68 00:26:30.252 [2024-07-15 16:07:57.079119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.079150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.091428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e9168 00:26:30.252 [2024-07-15 16:07:57.092515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.092549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.104473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190efae0 00:26:30.252 [2024-07-15 16:07:57.105496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.105530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.117886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fd208 00:26:30.252 [2024-07-15 16:07:57.118991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.252 [2024-07-15 16:07:57.119021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.252 [2024-07-15 16:07:57.130953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f5be8 00:26:30.252 [2024-07-15 16:07:57.132390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.253 [2024-07-15 16:07:57.132423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.253 [2024-07-15 16:07:57.143665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de8a8 00:26:30.253 [2024-07-15 16:07:57.145145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.253 [2024-07-15 16:07:57.145175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.253 [2024-07-15 16:07:57.155281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190edd58 00:26:30.253 [2024-07-15 16:07:57.157400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.253 [2024-07-15 16:07:57.157433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.253 [2024-07-15 16:07:57.167299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e73e0 00:26:30.253 [2024-07-15 16:07:57.168240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.253 [2024-07-15 16:07:57.168273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.253 [2024-07-15 16:07:57.180465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f1868 00:26:30.253 [2024-07-15 16:07:57.181608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.253 [2024-07-15 16:07:57.181642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.511 [2024-07-15 16:07:57.192565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e6b70 00:26:30.511 [2024-07-15 16:07:57.193655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.511 [2024-07-15 16:07:57.193688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.511 [2024-07-15 16:07:57.206740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de038 00:26:30.511 [2024-07-15 16:07:57.208081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.511 [2024-07-15 16:07:57.208113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.511 [2024-07-15 16:07:57.219445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1b48 00:26:30.511 [2024-07-15 16:07:57.220707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.511 [2024-07-15 16:07:57.220741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.511 [2024-07-15 16:07:57.232122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e0630 00:26:30.511 [2024-07-15 16:07:57.233411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.511 [2024-07-15 16:07:57.233444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.511 [2024-07-15 16:07:57.244883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e2c28 00:26:30.511 [2024-07-15 16:07:57.246247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.246280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.257544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e5a90 00:26:30.512 [2024-07-15 16:07:57.258831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.258871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.270269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ed0b0 00:26:30.512 [2024-07-15 16:07:57.271627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.271673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.283075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ebfd0 00:26:30.512 [2024-07-15 16:07:57.284346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.284378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.295773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e4578 00:26:30.512 [2024-07-15 16:07:57.297189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.297223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.308553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6458 00:26:30.512 [2024-07-15 16:07:57.309839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.309873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.321345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e88f8 00:26:30.512 [2024-07-15 16:07:57.322621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.322655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.335635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e95a0 00:26:30.512 [2024-07-15 16:07:57.337546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.337579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.347567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1710 00:26:30.512 [2024-07-15 16:07:57.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 
16:07:57.349105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.358982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ebb98 00:26:30.512 [2024-07-15 16:07:57.361088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.361118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.371016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f3a28 00:26:30.512 [2024-07-15 16:07:57.371993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.372023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.383796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190eee38 00:26:30.512 [2024-07-15 16:07:57.384740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.384773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.395663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de038 00:26:30.512 [2024-07-15 16:07:57.396577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.396609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.409925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f57b0 00:26:30.512 [2024-07-15 16:07:57.411076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.411106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.422664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e8088 00:26:30.512 [2024-07-15 16:07:57.423771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.512 [2024-07-15 16:07:57.423804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.512 [2024-07-15 16:07:57.435424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6cc8 00:26:30.512 [2024-07-15 16:07:57.436518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.512 [2024-07-15 16:07:57.436551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.448123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df118 00:26:30.771 [2024-07-15 16:07:57.449250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.449284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.461295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ef270 00:26:30.771 [2024-07-15 16:07:57.462561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.462596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.474297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f3e60 00:26:30.771 [2024-07-15 16:07:57.475598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.475632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.486948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f81e0 00:26:30.771 [2024-07-15 16:07:57.488232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.488265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.499615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fb8b8 00:26:30.771 [2024-07-15 16:07:57.500886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.500935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.512375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fa7d8 00:26:30.771 [2024-07-15 16:07:57.513645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.513679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.525129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ea680 00:26:30.771 [2024-07-15 16:07:57.526401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19417 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.526434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.536949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e0ea0 00:26:30.771 [2024-07-15 16:07:57.538254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.538286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.551195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ec408 00:26:30.771 [2024-07-15 16:07:57.552621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.552654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.564322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e6fa8 00:26:30.771 [2024-07-15 16:07:57.565946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.771 [2024-07-15 16:07:57.565976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.771 [2024-07-15 16:07:57.576419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e3060 00:26:30.772 [2024-07-15 16:07:57.578072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.578102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.588311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f7da8 00:26:30.772 [2024-07-15 16:07:57.589389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.589428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.600749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f3a28 00:26:30.772 [2024-07-15 16:07:57.601847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.601891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.615145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ee190 00:26:30.772 [2024-07-15 16:07:57.616936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20871 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.616966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.627051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e4de8 00:26:30.772 [2024-07-15 16:07:57.628299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.628333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.639933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e9168 00:26:30.772 [2024-07-15 16:07:57.641080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.641110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.652964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1b48 00:26:30.772 [2024-07-15 16:07:57.654410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.654443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.665736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f20d8 00:26:30.772 [2024-07-15 16:07:57.667175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.667219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.680064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ddc00 00:26:30.772 [2024-07-15 16:07:57.682198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.682231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.689088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de8a8 00:26:30.772 [2024-07-15 16:07:57.690076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.772 [2024-07-15 16:07:57.690104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.772 [2024-07-15 16:07:57.702090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e99d8 00:26:31.032 [2024-07-15 16:07:57.703078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:15695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.703109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.713751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ff3c8 00:26:31.032 [2024-07-15 16:07:57.714677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.714710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.726774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e3498 00:26:31.032 [2024-07-15 16:07:57.727890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.727936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.739317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f20d8 00:26:31.032 [2024-07-15 16:07:57.740450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.740479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.752426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fc560 00:26:31.032 [2024-07-15 16:07:57.753866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.753908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.764243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fb480 00:26:31.032 [2024-07-15 16:07:57.765652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.765682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.775996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190feb58 00:26:31.032 [2024-07-15 16:07:57.777403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.777434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.787760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fa3a0 00:26:31.032 [2024-07-15 16:07:57.789112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.789142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.799573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f8a50 00:26:31.032 [2024-07-15 16:07:57.801029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.801059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.811436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6458 00:26:31.032 [2024-07-15 16:07:57.812841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.812871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.822362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f2510 00:26:31.032 [2024-07-15 16:07:57.823666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.823694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.833241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f9b30 00:26:31.032 [2024-07-15 16:07:57.834162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.834192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.844942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df550 00:26:31.032 [2024-07-15 16:07:57.845843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.845873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.857007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e6fa8 00:26:31.032 [2024-07-15 16:07:57.857719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.857750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.869188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f0788 00:26:31.032 [2024-07-15 16:07:57.870083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.870113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.881582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e0630 00:26:31.032 [2024-07-15 16:07:57.882655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.882686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.893511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6458 00:26:31.032 [2024-07-15 16:07:57.894906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.894935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:31.032 [2024-07-15 16:07:57.905374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f4b08 00:26:31.032 [2024-07-15 16:07:57.906691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.032 [2024-07-15 16:07:57.906726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 16:07:57.916208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ec840 00:26:31.033 [2024-07-15 16:07:57.917959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.033 [2024-07-15 16:07:57.917989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 16:07:57.926237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e23b8 00:26:31.033 [2024-07-15 16:07:57.927050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.033 [2024-07-15 16:07:57.927080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 16:07:57.938357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df118 00:26:31.033 [2024-07-15 16:07:57.939319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.033 [2024-07-15 16:07:57.939348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 16:07:57.951311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e84c0 00:26:31.033 [2024-07-15 
16:07:57.952484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.033 [2024-07-15 16:07:57.952514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:57.963452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fac10 00:26:31.293 [2024-07-15 16:07:57.964739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:57.964768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:57.974570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f2948 00:26:31.293 [2024-07-15 16:07:57.975889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:57.975918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:57.985386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de038 00:26:31.293 [2024-07-15 16:07:57.986250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:57.986294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:57.997098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f2510 00:26:31.293 [2024-07-15 16:07:57.997980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:57.998011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.010412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e99d8 00:26:31.293 [2024-07-15 16:07:58.011807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.011837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.021251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fda78 00:26:31.293 [2024-07-15 16:07:58.022286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.022315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.032833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df118 
00:26:31.293 [2024-07-15 16:07:58.033825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.033854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.044852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f7da8 00:26:31.293 [2024-07-15 16:07:58.046017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.046047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.055886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f9b30 00:26:31.293 [2024-07-15 16:07:58.057018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.057048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.068794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fda78 00:26:31.293 [2024-07-15 16:07:58.070140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.070171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.080643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fb8b8 00:26:31.293 [2024-07-15 16:07:58.081999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.082029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.092456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fc128 00:26:31.293 [2024-07-15 16:07:58.093773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.093802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.103575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f96f8 00:26:31.293 [2024-07-15 16:07:58.104805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.104836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.114487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) 
with pdu=0x2000190ec408 00:26:31.293 [2024-07-15 16:07:58.115392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.115421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.126357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1f80 00:26:31.293 [2024-07-15 16:07:58.127082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.127112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.138244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e7818 00:26:31.293 [2024-07-15 16:07:58.139252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.139281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.150151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e0ea0 00:26:31.293 [2024-07-15 16:07:58.151215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.151242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.161976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190feb58 00:26:31.293 [2024-07-15 16:07:58.162955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.162984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.174018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fe2e8 00:26:31.293 [2024-07-15 16:07:58.174887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.174920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.186001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f2d80 00:26:31.293 [2024-07-15 16:07:58.187237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.187265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.198175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d96710) with pdu=0x2000190f3a28 00:26:31.293 [2024-07-15 16:07:58.199137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.199167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.210060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df988 00:26:31.293 [2024-07-15 16:07:58.211474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.211510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.293 [2024-07-15 16:07:58.220916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e8088 00:26:31.293 [2024-07-15 16:07:58.222883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.293 [2024-07-15 16:07:58.222912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.231214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e0a68 00:26:31.554 [2024-07-15 16:07:58.232065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.232093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.244308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1f80 00:26:31.554 [2024-07-15 16:07:58.245357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.245384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.256107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f4b08 00:26:31.554 [2024-07-15 16:07:58.257136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.257164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.268146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190eaab8 00:26:31.554 [2024-07-15 16:07:58.269345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.269373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.279337] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ee5c8 00:26:31.554 [2024-07-15 16:07:58.280460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.280488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:31.554 [2024-07-15 16:07:58.292382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6458 00:26:31.554 [2024-07-15 16:07:58.293718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.554 [2024-07-15 16:07:58.293745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.304475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ec840 00:26:31.555 [2024-07-15 16:07:58.305951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.305979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.314037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e5ec8 00:26:31.555 [2024-07-15 16:07:58.314942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.314971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.325918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190feb58 00:26:31.555 [2024-07-15 16:07:58.326811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.326839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.339085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190dfdc0 00:26:31.555 [2024-07-15 16:07:58.340538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.340566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.349967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ed0b0 00:26:31.555 [2024-07-15 16:07:58.350993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.351022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.361817] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f1430 00:26:31.555 [2024-07-15 16:07:58.362697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.362725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.375385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f57b0 00:26:31.555 [2024-07-15 16:07:58.377117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.377147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.387573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f6458 00:26:31.555 [2024-07-15 16:07:58.389499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.389528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.395747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fd640 00:26:31.555 [2024-07-15 16:07:58.396562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.396589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.406795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f0bc0 00:26:31.555 [2024-07-15 16:07:58.407601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.407629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.419982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f0bc0 00:26:31.555 [2024-07-15 16:07:58.420984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.421012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.432258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e6fa8 00:26:31.555 [2024-07-15 16:07:58.433217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.433245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.555 
[2024-07-15 16:07:58.443004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f57b0 00:26:31.555 [2024-07-15 16:07:58.444864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.444900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.453206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fb480 00:26:31.555 [2024-07-15 16:07:58.453977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.454004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.465449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f31b8 00:26:31.555 [2024-07-15 16:07:58.466452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.466479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:31.555 [2024-07-15 16:07:58.478573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e8088 00:26:31.555 [2024-07-15 16:07:58.479690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.555 [2024-07-15 16:07:58.479719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.815 [2024-07-15 16:07:58.490515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e5ec8 00:26:31.815 [2024-07-15 16:07:58.491677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.815 [2024-07-15 16:07:58.491705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.815 [2024-07-15 16:07:58.502378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ee190 00:26:31.815 [2024-07-15 16:07:58.503461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.815 [2024-07-15 16:07:58.503488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.815 [2024-07-15 16:07:58.514391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190eee38 00:26:31.815 [2024-07-15 16:07:58.515644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.515676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.526369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fa7d8 00:26:31.816 [2024-07-15 16:07:58.527656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.527683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.538417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ec840 00:26:31.816 [2024-07-15 16:07:58.539857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.539907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.549530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ddc00 00:26:31.816 [2024-07-15 16:07:58.550918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.550945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.560359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f92c0 00:26:31.816 [2024-07-15 16:07:58.561527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.561555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.573683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ed4e8 00:26:31.816 [2024-07-15 16:07:58.575196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.575224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.584519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190dfdc0 00:26:31.816 [2024-07-15 16:07:58.585669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.585696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.596427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e7c50 00:26:31.816 [2024-07-15 16:07:58.597389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.597417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.609902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f7970 00:26:31.816 [2024-07-15 16:07:58.611750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.611778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.618204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f81e0 00:26:31.816 [2024-07-15 16:07:58.619053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.619081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.630251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190df550 00:26:31.816 [2024-07-15 16:07:58.631061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.631091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.641067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e5ec8 00:26:31.816 [2024-07-15 16:07:58.641806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.641832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.653218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190de038 00:26:31.816 [2024-07-15 16:07:58.654144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.654173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.666245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fc998 00:26:31.816 [2024-07-15 16:07:58.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.667404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.678340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190fc560 00:26:31.816 [2024-07-15 16:07:58.679569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.679597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.690268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f92c0 00:26:31.816 [2024-07-15 16:07:58.691590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.691618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.702055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ed4e8 00:26:31.816 [2024-07-15 16:07:58.703366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.703394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.713928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e1b48 00:26:31.816 [2024-07-15 16:07:58.715323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.715355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.726625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f7100 00:26:31.816 [2024-07-15 16:07:58.728105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.728133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.816 [2024-07-15 16:07:58.739460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e3498 00:26:31.816 [2024-07-15 16:07:58.740948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.816 [2024-07-15 16:07:58.740992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:32.074 [2024-07-15 16:07:58.752309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190eaef0 00:26:32.074 [2024-07-15 16:07:58.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.074 [2024-07-15 16:07:58.753726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:32.074 [2024-07-15 16:07:58.765445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f3a28 00:26:32.075 [2024-07-15 16:07:58.766959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.075 [2024-07-15 16:07:58.766987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:32.075 [2024-07-15 16:07:58.777568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190ebfd0
00:26:32.075 [2024-07-15 16:07:58.779122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:32.075 [2024-07-15 16:07:58.779154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:32.075 [2024-07-15 16:07:58.790961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190f92c0
00:26:32.075 [2024-07-15 16:07:58.792605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:32.075 [2024-07-15 16:07:58.792636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:32.075 [2024-07-15 16:07:58.802884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d96710) with pdu=0x2000190e8088
00:26:32.075 [2024-07-15 16:07:58.804071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:32.075 [2024-07-15 16:07:58.804103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:32.075
00:26:32.075 Latency(us)
00:26:32.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:32.075 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:32.075 nvme0n1 : 2.01 20790.70 81.21 0.00 0.00 6145.68 2451.53 18252.99
00:26:32.075 ===================================================================================================================
00:26:32.075 Total : 20790.70 81.21 0.00 0.00 6145.68 2451.53 18252.99
00:26:32.075 0
00:26:32.075 16:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:32.075 16:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:32.075 16:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:32.075 16:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:32.075 | .driver_specific
00:26:32.075 | .nvme_error
00:26:32.075 | .status_code
00:26:32.075 | .command_transient_transport_error'
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255600
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1255600 ']'
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1255600
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255600
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255600'
00:26:32.332 killing process with pid 1255600
00:26:32.332 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1255600
00:26:32.332 Received shutdown signal, test time was about 2.000000 seconds
00:26:32.332
00:26:32.332 Latency(us)
00:26:32.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:32.333 ===================================================================================================================
00:26:32.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:32.333 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1255600
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1256117
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1256117 /var/tmp/bperf.sock
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1256117 ']'
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:32.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:32.592 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.592 [2024-07-15 16:07:59.415940] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:26:32.592 [2024-07-15 16:07:59.416036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256117 ]
00:26:32.592 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:32.592 Zero copy mechanism will not be used.
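(Editor's note: the trace above shows digest.sh relaunching bdevperf for the error-path run. The tool is started idle with -z, so it parses the randwrite/131072/qd=16 workload but waits for an RPC trigger, and waitforlisten blocks until the JSON-RPC socket answers. A minimal stand-alone sketch of the same launch pattern, assuming an SPDK build tree; BPERF_PID and the retry loop are illustrative stand-ins for autotest's waitforlisten helper, not its actual implementation:

    # Start bdevperf idle (-z): the workload from the command line is armed but not run yet.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    BPERF_PID=$!
    # Poll the JSON-RPC socket until the server answers (rough equivalent of waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the socket answers, every later step in this subtest, including perform_tests below, is an RPC against /var/tmp/bperf.sock.)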
00:26:32.592 EAL: No free 2048 kB hugepages reported on node 1
00:26:32.592 [2024-07-15 16:07:59.484581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:32.592 [2024-07-15 16:07:59.606750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:32.850 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:32.850 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:32.850 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:32.850 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:33.108 16:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:33.676 nvme0n1
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:33.676 16:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:33.676 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:33.676 Zero copy mechanism will not be used.
00:26:33.676 Running I/O for 2 seconds...
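(Editor's note: this is the core of the digest error subtest. Data digest is enabled on the controller with --ddgst, and the accel crc32c operations used for digest checks are forced to return corrupted results, so the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the repeating error triples below show. A hedged sketch of the inject-then-count sequence, using only RPCs visible in this trace; the first call omits -s, matching rpc_cmd in the trace, which targets the application's default RPC socket:

    # Corrupt crc32c results from the accel_error module so computed data
    # digests stop matching payloads (-i 32 presumably bounds how many
    # operations are affected; the trace uses exactly these arguments).
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # After the run, read back the transient transport error counter that
    # bdev_nvme_set_options --nvme-error-stat (above) makes available.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts the counter is positive, as in the (( 163 > 0 )) check earlier in this log.)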
00:26:33.676 [2024-07-15 16:08:00.484497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.484914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.484964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.500245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.500657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.500703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.519694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.520085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.520139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.539789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.540296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.540325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.559959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.560455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.560483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.578528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.578982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.579011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.676 [2024-07-15 16:08:00.599153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.676 [2024-07-15 16:08:00.599534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.676 [2024-07-15 16:08:00.599563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.619281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.619849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.936 [2024-07-15 16:08:00.619906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.639386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.639946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.936 [2024-07-15 16:08:00.639977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.660283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.660926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.936 [2024-07-15 16:08:00.660961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.679488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.680072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.936 [2024-07-15 16:08:00.680100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.700917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.701502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.936 [2024-07-15 16:08:00.701530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.936 [2024-07-15 16:08:00.719434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.936 [2024-07-15 16:08:00.719970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.720000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.738932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.739378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.739406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.758539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.759033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.759063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.778463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.779085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.779114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.796693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.797090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.797133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.810807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.811182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.811227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.824912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.825334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.825361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.838149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.838503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.838545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.937 [2024-07-15 16:08:00.851179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90 00:26:33.937 [2024-07-15 16:08:00.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.937 [2024-07-15 16:08:00.851556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:33.937 [2024-07-15 16:08:00.865193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90
00:26:33.937 [2024-07-15 16:08:00.865551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.937 [2024-07-15 16:08:00.865598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats roughly every 15-20 ms from 16:08:00.881 through 16:08:02.441: tcp.c:2081:data_crc32_calc_done reports a data digest error on the same tqpair (0x1bcbaf0, pdu 0x2000190fef90), and each affected 32-block WRITE (qid:1 cid:15, varying lba) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the intermediate repetitions are elided here ...]
00:26:35.761 [2024-07-15 16:08:02.460936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bcbaf0) with pdu=0x2000190fef90
00:26:35.761 [2024-07-15 16:08:02.461401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.761 [2024-07-15 16:08:02.461429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:35.761
00:26:35.761 Latency(us)
00:26:35.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:35.761 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:35.761 nvme0n1 : 2.01 1882.14 235.27 0.00 0.00 8479.76 5679.79 21845.33
00:26:35.761 ===================================================================================================================
00:26:35.761 Total : 1882.14 235.27 0.00 0.00 8479.76 5679.79 21845.33
00:26:35.761 0
00:26:35.761 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:35.761 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:35.761 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:35.761 | .driver_specific
00:26:35.761 | .nvme_error
00:26:35.761 | .status_code
00:26:35.761 | .command_transient_transport_error'
00:26:35.761 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 ))
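The check just traced is how the test turns the injected digest failures into a pass/fail condition: get_transient_errcount reads the per-bdev I/O statistics from the bperf application over its RPC socket and extracts the transient-transport-error counter with jq. A minimal stand-alone equivalent of the helper, as a sketch assuming bperf is still listening on /var/tmp/bperf.sock:

    get_transient_errcount() {
        # query bdev I/O statistics from the bperf app and extract the count of
        # commands that completed with COMMAND TRANSIENT TRANSPORT ERROR
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    # the test then simply asserts the counter is non-zero:
    (( $(get_transient_errcount nvme0n1) > 0 ))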
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1256117
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1256117 ']'
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1256117
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1256117
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1256117'
00:26:36.021 killing process with pid 1256117
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1256117
00:26:36.021 Received shutdown signal, test time was about 2.000000 seconds
00:26:36.021
00:26:36.021 Latency(us)
00:26:36.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:36.021 ===================================================================================================================
00:26:36.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:36.021 16:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1256117
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1254705
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1254705 ']'
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1254705
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254705
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254705'
00:26:36.282 killing process with pid 1254705
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1254705
00:26:36.282 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1254705
00:26:36.540
00:26:36.540 real 0m15.571s
00:26:36.540 user 0m31.547s
00:26:36.540 sys 0m3.729s
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:36.540 ************************************
00:26:36.540 END TEST nvmf_digest_error
00:26:36.540 ************************************
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
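Both shutdowns above walk the same killprocess helper from autotest_common.sh. A simplified reconstruction of the steps the trace shows (the real helper also covers FreeBSD and processes wrapped by sudo, which this sketch omits):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1       # no pid supplied
        kill -0 "$pid" || return 1      # bail out if the process is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # a sudo-wrapped target would need its child killed instead (omitted here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                     # reap it and propagate its exit status
    }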
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:36.540 rmmod nvme_tcp
00:26:36.540 rmmod nvme_fabrics
00:26:36.540 rmmod nvme_keyring
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1254705 ']'
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1254705
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1254705 ']'
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1254705
00:26:36.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1254705) - No such process
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1254705 is not found'
00:26:36.540 Process with pid 1254705 is not found
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:36.540 16:08:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:39.074 16:08:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:39.074
00:26:39.074 real 0m35.710s
00:26:39.074 user 1m4.194s
00:26:39.074 sys 0m9.101s
00:26:39.074 16:08:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:39.074 16:08:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:39.074 ************************************
00:26:39.074 END TEST nvmf_digest
00:26:39.074 ************************************
00:26:39.074 16:08:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:39.075 16:08:05 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:26:39.075 16:08:05 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:26:39.075 16:08:05 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:26:39.075 16:08:05 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:39.075 16:08:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:39.075 16:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:39.075 16:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:39.075 ************************************
00:26:39.075 START TEST nvmf_bdevperf
00:26:39.075 ************************************
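run_test, which opened the banner above, is autotest's generic wrapper: the '[' 3 -le 1 ']' trace is its argument-count guard, the starred banners mark test boundaries, and the real/user/sys summaries earlier in this log come from it timing the wrapped script. A rough sketch of the pattern (the actual wrapper in autotest_common.sh also manages xtrace state and exit-status bookkeeping, which this omits):

    run_test() {
        [ $# -le 1 ] && return 1        # needs a test name plus a command to run
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }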
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:39.075 * Looking for test storage...
00:26:39.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
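The identity variables just set are what the host side later hands to nvme-cli: NVME_HOST collects the --hostnqn/--hostid flags and NVME_CONNECT the base command. A typical later invocation built from them would look like this sketch (the target address variable NVMF_FIRST_TARGET_IP is assumed here to be filled in by nvmftestinit further down, and is not part of the trace above):

    # connect to the target's first TCP listener using the generated host identity
    $NVME_CONNECT "${NVME_HOST[@]}" \
        -t tcp -n "$NVME_SUBNQN" \
        -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"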
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:26:39.075 16:08:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:40.979 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
16:08:07 nvmf_tcp.nvmf_bdevperf --
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.980 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.980 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.980 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.980 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:26:40.980 00:26:40.980 --- 10.0.0.2 ping statistics --- 00:26:40.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.980 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:40.980 00:26:40.980 --- 10.0.0.1 ping statistics --- 00:26:40.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.980 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1258463 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1258463 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1258463 ']' 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.980 16:08:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.980 [2024-07-15 16:08:07.765066] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:40.980 [2024-07-15 16:08:07.765144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.980 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.980 [2024-07-15 16:08:07.833257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:41.238 [2024-07-15 16:08:07.952029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
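For anyone replaying this bench outside the CI harness: the nvmf_tcp_init sequence traced above is self-contained. A minimal sketch of the same plumbing, using this run's interface names (cvl_0_0 is the target-side NIC port, cvl_0_1 the initiator side; substitute your own two-port NIC) and run as root:

# Move the target port into its own network namespace and wire up the
# 10.0.0.0/24 test subnet, as nvmf/common.sh does in the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Isolating the target port in a namespace is what lets a single machine act as two hosts: the kernel cannot short-circuit the 10.0.0.1 to 10.0.0.2 connection over loopback, so the NVMe/TCP traffic really crosses between the two NIC ports.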
00:26:41.238 [2024-07-15 16:08:07.952093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.238 [2024-07-15 16:08:07.952111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.238 [2024-07-15 16:08:07.952125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.238 [2024-07-15 16:08:07.952138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.238 [2024-07-15 16:08:07.952504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.238 [2024-07-15 16:08:07.952526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.238 [2024-07-15 16:08:07.952529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:41.806 [2024-07-15 16:08:08.724268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.806 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.066 Malloc0 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
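The tgt_init phase above configures the freshly started target purely over its RPC socket: one TCP transport, a 64 MiB malloc bdev, one subsystem with one namespace and one listener. rpc_cmd is effectively the harness wrapper around scripts/rpc.py, so a hand-run equivalent (a sketch, assuming the default /var/tmp/spdk.sock and the SPDK repo root as the working directory) would be:

# Same bring-up as the rpc_cmd calls traced above:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u: IO unit size; -o: disable the TCP C2H success optimization
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk with 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420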
00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.067 [2024-07-15 16:08:08.783552] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.067 { 00:26:42.067 "params": { 00:26:42.067 "name": "Nvme$subsystem", 00:26:42.067 "trtype": "$TEST_TRANSPORT", 00:26:42.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.067 "adrfam": "ipv4", 00:26:42.067 "trsvcid": "$NVMF_PORT", 00:26:42.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.067 "hdgst": ${hdgst:-false}, 00:26:42.067 "ddgst": ${ddgst:-false} 00:26:42.067 }, 00:26:42.067 "method": "bdev_nvme_attach_controller" 00:26:42.067 } 00:26:42.067 EOF 00:26:42.067 )") 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:42.067 16:08:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:42.067 "params": { 00:26:42.067 "name": "Nvme1", 00:26:42.067 "trtype": "tcp", 00:26:42.067 "traddr": "10.0.0.2", 00:26:42.067 "adrfam": "ipv4", 00:26:42.067 "trsvcid": "4420", 00:26:42.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:42.067 "hdgst": false, 00:26:42.067 "ddgst": false 00:26:42.067 }, 00:26:42.067 "method": "bdev_nvme_attach_controller" 00:26:42.067 }' 00:26:42.067 [2024-07-15 16:08:08.832747] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:42.067 [2024-07-15 16:08:08.832819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258626 ] 00:26:42.067 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.067 [2024-07-15 16:08:08.891590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.327 [2024-07-15 16:08:09.004458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.586 Running I/O for 1 seconds... 
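The config bdevperf reads from /dev/fd/62 is the printf output shown above, wrapped by gen_nvmf_target_json into a bdev-subsystem config. Written to a plain file, the same one-second sanity run looks like the sketch below; the params block is verbatim from this trace, while the outer wrapper is reconstructed from nvmf/common.sh rather than echoed here, and /tmp/bdevperf.json is just an example path:

# Reconstructed bdevperf config and the equivalent standalone run
# (execute from the SPDK repo root):
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1

-q 128 keeps 128 I/Os in flight, -o 4096 issues 4 KiB I/Os, -w verify reads back and checks every write, -t 1 runs for one second; the table that follows is the per-bdev and total summary of exactly that run.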
00:26:43.525
00:26:43.525                                                                                     Latency(us)
00:26:43.525 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:43.525 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:43.525   Verification LBA range: start 0x0 length 0x4000
00:26:43.525   Nvme1n1              :       1.01    8597.33      33.58       0.00       0.00   14824.35     958.77   17767.54
00:26:43.526 ===================================================================================================================
00:26:43.526 Total                  :               8597.33      33.58       0.00       0.00   14824.35     958.77   17767.54
00:26:43.825 16:08:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1258795 16:08:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 16:08:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 16:08:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:43.825 { 00:26:43.825 "params": { 00:26:43.825 "name": "Nvme$subsystem", 00:26:43.825 "trtype": "$TEST_TRANSPORT", 00:26:43.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.825 "adrfam": "ipv4", 00:26:43.825 "trsvcid": "$NVMF_PORT", 00:26:43.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.825 "hdgst": ${hdgst:-false}, 00:26:43.825 "ddgst": ${ddgst:-false} 00:26:43.825 }, 00:26:43.825 "method": "bdev_nvme_attach_controller" 00:26:43.825 } 00:26:43.825 EOF 00:26:43.825 )") 00:26:43.825 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:43.825 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:43.826 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:43.826 16:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:43.826 "params": { 00:26:43.826 "name": "Nvme1", 00:26:43.826 "trtype": "tcp", 00:26:43.826 "traddr": "10.0.0.2", 00:26:43.826 "adrfam": "ipv4", 00:26:43.826 "trsvcid": "4420", 00:26:43.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.826 "hdgst": false, 00:26:43.826 "ddgst": false 00:26:43.826 }, 00:26:43.826 "method": "bdev_nvme_attach_controller" 00:26:43.826 }' 00:26:43.826 [2024-07-15 16:08:10.680705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:43.826 [2024-07-15 16:08:10.680778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258795 ] 00:26:44.107 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.107 [2024-07-15 16:08:10.745542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.366 [2024-07-15 16:08:10.861551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.366 Running I/O for 15 seconds...
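Everything from here on is the point of the test. A second bdevperf run is started with -t 15 and -f (which, as host/bdevperf.sh relies on, keeps bdevperf running across bdev failure instead of exiting), it gets three seconds of clean I/O, and then the script SIGKILLs the nvmf target, pid 1258463. The host-side driver then completes every command still queued on the I/O qpair back to the bdev layer with ABORTED - SQ DELETION, which is NVMe generic status 0x08 as printed by nvme_qpair.c; dnr:0 means the do-not-retry bit is clear, so the I/Os are retryable once the controller reconnects, and the flood of NOTICE pairs below is expected rather than a failure. For sifting such a flood in a saved copy of the console output (console.log is a hypothetical file name):

# Quick triage of the abort flood in a saved log:
grep -c 'ABORTED - SQ DELETION' console.log    # how many in-flight commands were aborted
grep -o 'lba:[0-9]*' console.log | cut -d: -f2 | sort -n | sed -n '1p;$p'    # lowest and highest LBA affected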
00:26:46.906 16:08:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1258463 00:26:46.906 16:08:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:46.906 [2024-07-15 16:08:13.647340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.906 [2024-07-15 16:08:13.647667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.647981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.647995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.648010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.648024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.648040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.648054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.648070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.648084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.906 [2024-07-15 16:08:13.648099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.906 [2024-07-15 16:08:13.648112] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:46.907 (the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats here for every remaining in-flight command, WRITEs lba 38488 through 39048 and READs lba 38320 through 38384, each completed ABORTED - SQ DELETION (00/08) qid:1 dnr:0)
00:26:46.908 [2024-07-15 16:08:13.650817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.908 [2024-07-15 16:08:13.650833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.650849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.650865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.650887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.650907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.650938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.650952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.650967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.650981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.650995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:46.908 [2024-07-15 16:08:13.651188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.908 [2024-07-15 16:08:13.651460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.908 [2024-07-15 16:08:13.651476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.909 [2024-07-15 16:08:13.651687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed74c0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.651723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:46.909 [2024-07-15 16:08:13.651736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:46.909 [2024-07-15 16:08:13.651749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39272 len:8 PRP1 0x0 PRP2 0x0 00:26:46.909 [2024-07-15 16:08:13.651763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.909 [2024-07-15 16:08:13.651836] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xed74c0 was disconnected and freed. reset controller. 
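(The "(00/08)" printed on every completion above is the NVMe status pair: status code type 0x00, generic command status, and status code 0x08, Aborted - SQ Deletion; the trailing p/m/dnr fields are the phase, more, and do-not-retry bits of the same status word. A minimal sketch of how those fields unpack, assuming the CQE dword-3 bit layout from the NVMe base specification rather than SPDK's own print helper:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the 16-bit completion status word (CQE dword 3, upper half,
     * phase bit included), yielding the "(SCT/SC) p m dnr" fields seen in
     * the spdk_nvme_print_completion records above. */
    static void decode_status(uint16_t sw)
    {
        unsigned p   = sw & 0x1;          /* bit 0: phase tag */
        unsigned sc  = (sw >> 1) & 0xff;  /* bits 8:1: status code */
        unsigned sct = (sw >> 9) & 0x7;   /* bits 11:9: status code type */
        unsigned m   = (sw >> 14) & 0x1;  /* bit 14: more */
        unsigned dnr = (sw >> 15) & 0x1;  /* bit 15: do not retry */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        decode_status(0x08 << 1);  /* prints "(00/08) p:0 m:0 dnr:0" */
        return 0;
    }

The sample call reproduces the exact "(00/08) p:0 m:0 dnr:0" tuple stamped on each aborted write.)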
00:26:46.909 [2024-07-15 16:08:13.651938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:46.909 [2024-07-15 16:08:13.651960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:46.909 [2024-07-15 16:08:13.651975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:46.909 [2024-07-15 16:08:13.651988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:46.909 [2024-07-15 16:08:13.652001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:46.909 [2024-07-15 16:08:13.652013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:46.909 [2024-07-15 16:08:13.652026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:46.909 [2024-07-15 16:08:13.652039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:46.909 [2024-07-15 16:08:13.652051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:46.909 [2024-07-15 16:08:13.655852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.909 [2024-07-15 16:08:13.655905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:46.909 [2024-07-15 16:08:13.656938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.909 [2024-07-15 16:08:13.656971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:46.909 [2024-07-15 16:08:13.656988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:46.909 [2024-07-15 16:08:13.657290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:46.909 [2024-07-15 16:08:13.657594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.909 [2024-07-15 16:08:13.657618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.909 [2024-07-15 16:08:13.657638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.909 [2024-07-15 16:08:13.662214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
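(Two errno values recur in the reset attempt above: errno = 111 is ECONNREFUSED on Linux, meaning the target at 10.0.0.2:4420 is no longer accepting TCP connections, and the "(9): Bad file descriptor" flush failures are EBADF, raised when the transport tries to flush a qpair whose socket has already been torn down. A tiny self-contained illustration of the EBADF case, not SPDK code; a plain pipe stands in for the already-closed socket:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;
        close(fds[1]);                  /* descriptor already torn down */
        if (write(fds[1], "x", 1) < 0)  /* fails with errno 9 (EBADF) */
            printf("write failed, errno = %d: %s\n", errno, strerror(errno));
        return 0;
    }

Any I/O on a closed descriptor fails the same way the "Failed to flush tqpair" records do.)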
00:26:46.909 [2024-07-15 16:08:13.671116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.671634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.671665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.671683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.671993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.672294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.672317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.672332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.676901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.909 [2024-07-15 16:08:13.686071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.686559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.686591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.686609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.686915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.687218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.687241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.687256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.691817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.909 [2024-07-15 16:08:13.700997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.701506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.701536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.701559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.701856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.702165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.702189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.702204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.706765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.909 [2024-07-15 16:08:13.715918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.716467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.716498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.716514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.716810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.717119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.717145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.717160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.721719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.909 [2024-07-15 16:08:13.730864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.731354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.731384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.731401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.731697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.732010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.732034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.732049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.736613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.909 [2024-07-15 16:08:13.745888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.746369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.746399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.909 [2024-07-15 16:08:13.746416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.909 [2024-07-15 16:08:13.746702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.909 [2024-07-15 16:08:13.747005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.909 [2024-07-15 16:08:13.747034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.909 [2024-07-15 16:08:13.747049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.909 [2024-07-15 16:08:13.751449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.909 [2024-07-15 16:08:13.760376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.909 [2024-07-15 16:08:13.760792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-07-15 16:08:13.760819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.910 [2024-07-15 16:08:13.760834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.910 [2024-07-15 16:08:13.761138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.910 [2024-07-15 16:08:13.761410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.910 [2024-07-15 16:08:13.761430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.910 [2024-07-15 16:08:13.761442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.910 [2024-07-15 16:08:13.765302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.910 [2024-07-15 16:08:13.774598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.910 [2024-07-15 16:08:13.775066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-07-15 16:08:13.775094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.910 [2024-07-15 16:08:13.775109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.910 [2024-07-15 16:08:13.775412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.910 [2024-07-15 16:08:13.775658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.910 [2024-07-15 16:08:13.775677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.910 [2024-07-15 16:08:13.775690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.910 [2024-07-15 16:08:13.779535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.910 [2024-07-15 16:08:13.788712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.910 [2024-07-15 16:08:13.789190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-07-15 16:08:13.789231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.910 [2024-07-15 16:08:13.789245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.910 [2024-07-15 16:08:13.789541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.910 [2024-07-15 16:08:13.789787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.910 [2024-07-15 16:08:13.789806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.910 [2024-07-15 16:08:13.789819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.910 [2024-07-15 16:08:13.793668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.911 [2024-07-15 16:08:13.802938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.911 [2024-07-15 16:08:13.803520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-07-15 16:08:13.803547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.911 [2024-07-15 16:08:13.803563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.911 [2024-07-15 16:08:13.803865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.911 [2024-07-15 16:08:13.804122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.911 [2024-07-15 16:08:13.804146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.911 [2024-07-15 16:08:13.804158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.911 [2024-07-15 16:08:13.807983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.911 [2024-07-15 16:08:13.817175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.911 [2024-07-15 16:08:13.817616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-07-15 16:08:13.817656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.911 [2024-07-15 16:08:13.817671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.911 [2024-07-15 16:08:13.817993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.911 [2024-07-15 16:08:13.818271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.911 [2024-07-15 16:08:13.818306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.911 [2024-07-15 16:08:13.818319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.911 [2024-07-15 16:08:13.822136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.911 [2024-07-15 16:08:13.831437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.911 [2024-07-15 16:08:13.831902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-07-15 16:08:13.831930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:46.911 [2024-07-15 16:08:13.831945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:46.911 [2024-07-15 16:08:13.832248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:46.911 [2024-07-15 16:08:13.832517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.911 [2024-07-15 16:08:13.832537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.911 [2024-07-15 16:08:13.832549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.836478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.171 [2024-07-15 16:08:13.845754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.846249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.846277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.846293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.846604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.846851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.846892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.846906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.850852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.171 [2024-07-15 16:08:13.859998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.860489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.860530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.860547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.860849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.861148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.861169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.861182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.865014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.171 [2024-07-15 16:08:13.874213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.874716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.874758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.874774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.875064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.875349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.875369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.875381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.879203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.171 [2024-07-15 16:08:13.888295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.888858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.888905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.888922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.889227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.889490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.889509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.889526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.893338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.171 [2024-07-15 16:08:13.902524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.902995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.903023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.903039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.903341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.903587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.903606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.903618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.907675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.171 [2024-07-15 16:08:13.916874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.917439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.917482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.917498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.917803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.918099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.918121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.918134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.922210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.171 [2024-07-15 16:08:13.931273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.931744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.931772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.931787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.932072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.932363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.932383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.932396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.936510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.171 [2024-07-15 16:08:13.945408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.945934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.945967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.945983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.946277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.171 [2024-07-15 16:08:13.946523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.171 [2024-07-15 16:08:13.946542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.171 [2024-07-15 16:08:13.946553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.171 [2024-07-15 16:08:13.950365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.171 [2024-07-15 16:08:13.959650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.171 [2024-07-15 16:08:13.960144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.171 [2024-07-15 16:08:13.960173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.171 [2024-07-15 16:08:13.960188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.171 [2024-07-15 16:08:13.960490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:13.960736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:13.960756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:13.960768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:13.964612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.172 [2024-07-15 16:08:13.973832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:13.974363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:13.974404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:13.974420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:13.974720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:13.975013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:13.975034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:13.975048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:13.978843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.172 [2024-07-15 16:08:13.988015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:13.988535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:13.988562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:13.988577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:13.988858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:13.989162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:13.989184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:13.989197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:13.993035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.172 [2024-07-15 16:08:14.002258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.002732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.002760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.002775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.003066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.003335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.003355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.003367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.007192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.172 [2024-07-15 16:08:14.016412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.016863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.016897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.016927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.017208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.017470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.017489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.017501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.021313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.172 [2024-07-15 16:08:14.030523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.031124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.031152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.031167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.031450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.031696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.031716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.031728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.035511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.172 [2024-07-15 16:08:14.044752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.045229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.045257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.045273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.045576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.045823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.045842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.045854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.049705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.172 [2024-07-15 16:08:14.059031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.059476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.059518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.059534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.059822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.060111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.060134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.060147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.064150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.172 [2024-07-15 16:08:14.073187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.073639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.073667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.073683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.073980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.074248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.074268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.074280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.078115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.172 [2024-07-15 16:08:14.087295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.172 [2024-07-15 16:08:14.087764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.172 [2024-07-15 16:08:14.087792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.172 [2024-07-15 16:08:14.087813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.172 [2024-07-15 16:08:14.088099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.172 [2024-07-15 16:08:14.088365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.172 [2024-07-15 16:08:14.088384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.172 [2024-07-15 16:08:14.088396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.172 [2024-07-15 16:08:14.092248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.434 [2024-07-15 16:08:14.101929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.434 [2024-07-15 16:08:14.102412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.434 [2024-07-15 16:08:14.102455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.434 [2024-07-15 16:08:14.102471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.434 [2024-07-15 16:08:14.102775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.434 [2024-07-15 16:08:14.103051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.434 [2024-07-15 16:08:14.103072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.434 [2024-07-15 16:08:14.103084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.434 [2024-07-15 16:08:14.107029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.434 [2024-07-15 16:08:14.116011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.434 [2024-07-15 16:08:14.116521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.434 [2024-07-15 16:08:14.116562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.434 [2024-07-15 16:08:14.116578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.434 [2024-07-15 16:08:14.116904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.434 [2024-07-15 16:08:14.117158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.434 [2024-07-15 16:08:14.117193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.434 [2024-07-15 16:08:14.117205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.434 [2024-07-15 16:08:14.121017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.434 [2024-07-15 16:08:14.130270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.434 [2024-07-15 16:08:14.130701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.434 [2024-07-15 16:08:14.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.434 [2024-07-15 16:08:14.130755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.434 [2024-07-15 16:08:14.131072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.434 [2024-07-15 16:08:14.131356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.434 [2024-07-15 16:08:14.131381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.434 [2024-07-15 16:08:14.131394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.434 [2024-07-15 16:08:14.135214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.434 [2024-07-15 16:08:14.144468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.434 [2024-07-15 16:08:14.144944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.434 [2024-07-15 16:08:14.144972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.434 [2024-07-15 16:08:14.145002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.434 [2024-07-15 16:08:14.145287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.434 [2024-07-15 16:08:14.145533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.434 [2024-07-15 16:08:14.145552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.434 [2024-07-15 16:08:14.145565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.434 [2024-07-15 16:08:14.149412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.434 [2024-07-15 16:08:14.158654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.434 [2024-07-15 16:08:14.159138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.434 [2024-07-15 16:08:14.159167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.159182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.159485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.159731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.159750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.159762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.163953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.172951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.173406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.173447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.173463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.173725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.174000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.174021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.174034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.177856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.187109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.187728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.187756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.187772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.188061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.188330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.188349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.188362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.192180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.201163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.201653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.201694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.201710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.202020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.202286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.202306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.202319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.206131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.215327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.215882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.215910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.215925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.216220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.216466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.216484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.216497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.220306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.229460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.230005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.230046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.230062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.230373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.230619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.230638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.230650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.234498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.243671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.244120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.244148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.244164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.244449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.244716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.244735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.244747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.248593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.257820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.258333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.258361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.258377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.258679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.258953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.258974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.258986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.262812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.271924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.272366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.272393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.272423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.272713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.272988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.273009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.273027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.276828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.286056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.286605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.286632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.286647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.286963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.287239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.287259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.287272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.291082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.300355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.435 [2024-07-15 16:08:14.300888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.435 [2024-07-15 16:08:14.300915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.435 [2024-07-15 16:08:14.300945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.435 [2024-07-15 16:08:14.301226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.435 [2024-07-15 16:08:14.301472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.435 [2024-07-15 16:08:14.301491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.435 [2024-07-15 16:08:14.301503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.435 [2024-07-15 16:08:14.305317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.435 [2024-07-15 16:08:14.314564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.436 [2024-07-15 16:08:14.315057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.436 [2024-07-15 16:08:14.315085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.436 [2024-07-15 16:08:14.315101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.436 [2024-07-15 16:08:14.315388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.436 [2024-07-15 16:08:14.315634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.436 [2024-07-15 16:08:14.315653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.436 [2024-07-15 16:08:14.315665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.436 [2024-07-15 16:08:14.319521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.436 [2024-07-15 16:08:14.328743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.436 [2024-07-15 16:08:14.329194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.436 [2024-07-15 16:08:14.329226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.436 [2024-07-15 16:08:14.329242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.436 [2024-07-15 16:08:14.329528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.436 [2024-07-15 16:08:14.329774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.436 [2024-07-15 16:08:14.329793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.436 [2024-07-15 16:08:14.329805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.436 [2024-07-15 16:08:14.333606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.436 [2024-07-15 16:08:14.342834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.436 [2024-07-15 16:08:14.343317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.436 [2024-07-15 16:08:14.343358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.436 [2024-07-15 16:08:14.343375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.436 [2024-07-15 16:08:14.343676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.436 [2024-07-15 16:08:14.343964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.436 [2024-07-15 16:08:14.343985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.436 [2024-07-15 16:08:14.343999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.436 [2024-07-15 16:08:14.347844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.436 [2024-07-15 16:08:14.357081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.436 [2024-07-15 16:08:14.357655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.436 [2024-07-15 16:08:14.357696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.436 [2024-07-15 16:08:14.357712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.436 [2024-07-15 16:08:14.358024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.436 [2024-07-15 16:08:14.358299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.436 [2024-07-15 16:08:14.358319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.436 [2024-07-15 16:08:14.358332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.436 [2024-07-15 16:08:14.362384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.698 [2024-07-15 16:08:14.371567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.698 [2024-07-15 16:08:14.372067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.698 [2024-07-15 16:08:14.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.698 [2024-07-15 16:08:14.372112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.698 [2024-07-15 16:08:14.372415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.698 [2024-07-15 16:08:14.372667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.698 [2024-07-15 16:08:14.372686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.698 [2024-07-15 16:08:14.372698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.698 [2024-07-15 16:08:14.376595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.698 [2024-07-15 16:08:14.385749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.698 [2024-07-15 16:08:14.386213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.698 [2024-07-15 16:08:14.386255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.698 [2024-07-15 16:08:14.386271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.698 [2024-07-15 16:08:14.386573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.698 [2024-07-15 16:08:14.386819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.698 [2024-07-15 16:08:14.386838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.698 [2024-07-15 16:08:14.386850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.698 [2024-07-15 16:08:14.390813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.698 [2024-07-15 16:08:14.399923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.698 [2024-07-15 16:08:14.400363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.698 [2024-07-15 16:08:14.400403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.698 [2024-07-15 16:08:14.400419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.698 [2024-07-15 16:08:14.400724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.698 [2024-07-15 16:08:14.401016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.698 [2024-07-15 16:08:14.401037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.698 [2024-07-15 16:08:14.401050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.404849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.414043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.414484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.414510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.414525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.414807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.415105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.415126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.415139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.419150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.428428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.428948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.428976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.428992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.429282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.429529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.429547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.429560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.433399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.442605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.443054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.443082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.443098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.443404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.443650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.443668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.443680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.447545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.456742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.457199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.457239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.457253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.457531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.457776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.457795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.457807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.461636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.470773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.471234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.471262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.471283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.471588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.471834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.471853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.471866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.475733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.484965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.485449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.485490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.485507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.485807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.486103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.486125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.486138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.489963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.499191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.499721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.499749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.499764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.500050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.500334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.500353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.500365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.504176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.513265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.513684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.513711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.513726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.514014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.514307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.514330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.514343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.518138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.527496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.528025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.528068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.528084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.528382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.528628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.528647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.528659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.532499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.541632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.542097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.542125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.542140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.542421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.542667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.699 [2024-07-15 16:08:14.542685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.699 [2024-07-15 16:08:14.542698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.699 [2024-07-15 16:08:14.546590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.699 [2024-07-15 16:08:14.555732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.699 [2024-07-15 16:08:14.556260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.699 [2024-07-15 16:08:14.556288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.699 [2024-07-15 16:08:14.556303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.699 [2024-07-15 16:08:14.556605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.699 [2024-07-15 16:08:14.556851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.700 [2024-07-15 16:08:14.556870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.700 [2024-07-15 16:08:14.556906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.700 [2024-07-15 16:08:14.560718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.700 [2024-07-15 16:08:14.569893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.700 [2024-07-15 16:08:14.570347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.700 [2024-07-15 16:08:14.570386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.700 [2024-07-15 16:08:14.570401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.700 [2024-07-15 16:08:14.570664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.700 [2024-07-15 16:08:14.570954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.700 [2024-07-15 16:08:14.570976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.700 [2024-07-15 16:08:14.570989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.700 [2024-07-15 16:08:14.574786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.700 [2024-07-15 16:08:14.584135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.700 [2024-07-15 16:08:14.584657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.700 [2024-07-15 16:08:14.584684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.700 [2024-07-15 16:08:14.584715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.700 [2024-07-15 16:08:14.585041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.700 [2024-07-15 16:08:14.585311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.700 [2024-07-15 16:08:14.585331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.700 [2024-07-15 16:08:14.585343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.700 [2024-07-15 16:08:14.589155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.700 [2024-07-15 16:08:14.598327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.700 [2024-07-15 16:08:14.598819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.700 [2024-07-15 16:08:14.598850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.700 [2024-07-15 16:08:14.598867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.700 [2024-07-15 16:08:14.599174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.700 [2024-07-15 16:08:14.599475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.700 [2024-07-15 16:08:14.599498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.700 [2024-07-15 16:08:14.599513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.700 [2024-07-15 16:08:14.604080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.700 [2024-07-15 16:08:14.613232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.700 [2024-07-15 16:08:14.613846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.700 [2024-07-15 16:08:14.613908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.700 [2024-07-15 16:08:14.613925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.700 [2024-07-15 16:08:14.614227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.700 [2024-07-15 16:08:14.614527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.700 [2024-07-15 16:08:14.614550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.700 [2024-07-15 16:08:14.614565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.700 [2024-07-15 16:08:14.619132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.700 [2024-07-15 16:08:14.628187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.628692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.628734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.628751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.629086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.629388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.629408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.629420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.633998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.643155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.643721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.643748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.643763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.644087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.644388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.644411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.644426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.648994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.658146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.658648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.658675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.658705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.659029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.659330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.659353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.659374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.663945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.673323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.673874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.673914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.673931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.674227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.674527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.674551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.674565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.679141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.688305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.688783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.688813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.688830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.689138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.689438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.689461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.689476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.694053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.703207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.703739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.703779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.703794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.704105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.704407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.704430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.704445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.709014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.718173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.718679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.718714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.718733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.719039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.719339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.719362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.719377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.723949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.733098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.733647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.733677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.733694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.734003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.734303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.734327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.734342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.738909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.748056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.748554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.748584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.748601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.748907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.962 [2024-07-15 16:08:14.749208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.962 [2024-07-15 16:08:14.749231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.962 [2024-07-15 16:08:14.749246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.962 [2024-07-15 16:08:14.753804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.962 [2024-07-15 16:08:14.762957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.962 [2024-07-15 16:08:14.763461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.962 [2024-07-15 16:08:14.763501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.962 [2024-07-15 16:08:14.763516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.962 [2024-07-15 16:08:14.763830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.963 [2024-07-15 16:08:14.764147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.963 [2024-07-15 16:08:14.764172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.963 [2024-07-15 16:08:14.764187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.963 [2024-07-15 16:08:14.768744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.963 [2024-07-15 16:08:14.777914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.963 [2024-07-15 16:08:14.778422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.963 [2024-07-15 16:08:14.778453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:47.963 [2024-07-15 16:08:14.778470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:47.963 [2024-07-15 16:08:14.778766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:47.963 [2024-07-15 16:08:14.779078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.963 [2024-07-15 16:08:14.779103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.963 [2024-07-15 16:08:14.779118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.963 [2024-07-15 16:08:14.783676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.963 [2024-07-15 16:08:14.792887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.793393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.793433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.793449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.793748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.794061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.794086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.794101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.798661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.963 [2024-07-15 16:08:14.807807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.808288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.808319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.808337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.808633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.808959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.808983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.808998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.813570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.963 [2024-07-15 16:08:14.822738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.823377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.823436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.823453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.823749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.824060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.824084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.824098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.828664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.963 [2024-07-15 16:08:14.837823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.838430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.838482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.838499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.838795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.839105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.839129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.839144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.843713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.963 [2024-07-15 16:08:14.852883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.853380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.853410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.853427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.853723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.854035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.854059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.854073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.858637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.963 [2024-07-15 16:08:14.867789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.868270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.868301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.868324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.868620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.868934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.868958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.868972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.873529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.963 [2024-07-15 16:08:14.882683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.963 [2024-07-15 16:08:14.883166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.963 [2024-07-15 16:08:14.883197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:47.963 [2024-07-15 16:08:14.883214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:47.963 [2024-07-15 16:08:14.883511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:47.963 [2024-07-15 16:08:14.883811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.963 [2024-07-15 16:08:14.883834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.963 [2024-07-15 16:08:14.883849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.963 [2024-07-15 16:08:14.888395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.225 [2024-07-15 16:08:14.897511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.898026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.898054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.898070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.898390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.898691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.898714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.898729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.903296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.225 [2024-07-15 16:08:14.912445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.912947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.912979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.912996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.913292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.913592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.913621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.913636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.918206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.225 [2024-07-15 16:08:14.927355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.927942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.927973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.927990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.928286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.928587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.928610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.928625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.933204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.225 [2024-07-15 16:08:14.942376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.942863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.942903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.942921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.943218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.943518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.943542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.943556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.948122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.225 [2024-07-15 16:08:14.957279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.957811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.957837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.957867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.958189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.958491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.958514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.958528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.963098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.225 [2024-07-15 16:08:14.972247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.972757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.972783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.972813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.973145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.973446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.973470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.973485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.978053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.225 [2024-07-15 16:08:14.987197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:14.987702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:14.987742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.225 [2024-07-15 16:08:14.987758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.225 [2024-07-15 16:08:14.988080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.225 [2024-07-15 16:08:14.988382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.225 [2024-07-15 16:08:14.988405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.225 [2024-07-15 16:08:14.988420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.225 [2024-07-15 16:08:14.993049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.225 [2024-07-15 16:08:15.002204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.225 [2024-07-15 16:08:15.002691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.225 [2024-07-15 16:08:15.002718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.002733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.003055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.003357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.003380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.003395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.007961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.226 [2024-07-15 16:08:15.017107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.017606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.017636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.017653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.017966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.018267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.018290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.018305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.022861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.226 [2024-07-15 16:08:15.032019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.032518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.032545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.032575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.032889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.033191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.033215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.033229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.037789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.226 [2024-07-15 16:08:15.046947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.047415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.047445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.047462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.047758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.048071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.048095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.048110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.052667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.226 [2024-07-15 16:08:15.061881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.062360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.062401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.062416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.062715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.063027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.063051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.063072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.067630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.226 [2024-07-15 16:08:15.076790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.077349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.077380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.077398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.077694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.078005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.078029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.078044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.082600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.226 [2024-07-15 16:08:15.091762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.092246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.092277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.092294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.092589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.092902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.092926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.092941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.097503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.226 [2024-07-15 16:08:15.106671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.107197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.107239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.107254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.107589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.107899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.107923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.107938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.112500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.226 [2024-07-15 16:08:15.121649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.122152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.122189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.122207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.122503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.122803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.122826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.122841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.127411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.226 [2024-07-15 16:08:15.136557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.137037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.137068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.137086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.137381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.137682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.137704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.137719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.226 [2024-07-15 16:08:15.142291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.226 [2024-07-15 16:08:15.151406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.226 [2024-07-15 16:08:15.151919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.226 [2024-07-15 16:08:15.151946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.226 [2024-07-15 16:08:15.151962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.226 [2024-07-15 16:08:15.152270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.226 [2024-07-15 16:08:15.152569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.226 [2024-07-15 16:08:15.152592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.226 [2024-07-15 16:08:15.152607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.486 [2024-07-15 16:08:15.157145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.486 [2024-07-15 16:08:15.166294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.486 [2024-07-15 16:08:15.166809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.166835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.166850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.167186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.167494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.167517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.167532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.172101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.181250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.181815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.181841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.181872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.182188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.182488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.182511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.182526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.187111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.487 [2024-07-15 16:08:15.196255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.196846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.196907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.196939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.197203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.197513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.197536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.197551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.202141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.211284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.211863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.211950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.211967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.212281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.212583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.212607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.212622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.217184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.487 [2024-07-15 16:08:15.226325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.226800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.226831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.226848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.227156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.227458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.227481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.227496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.232070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.241242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.241733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.241775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.241791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.242132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.242434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.242458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.242473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.247055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.487 [2024-07-15 16:08:15.256234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.256732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.256763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.256780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.257088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.257390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.257413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.257428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.262009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.271179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.271717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.271757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.271777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.272104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.272406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.272429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.272444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.277020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.487 [2024-07-15 16:08:15.286186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.286711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.286762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.286779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.287086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.287387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.287411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.287425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.292009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.301179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.301729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.301760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.301776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.302085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.302387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.302411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.302425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.307003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.487 [2024-07-15 16:08:15.316197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.487 [2024-07-15 16:08:15.316741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.487 [2024-07-15 16:08:15.316768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.487 [2024-07-15 16:08:15.316783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.487 [2024-07-15 16:08:15.317105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.487 [2024-07-15 16:08:15.317407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.487 [2024-07-15 16:08:15.317435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.487 [2024-07-15 16:08:15.317451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.487 [2024-07-15 16:08:15.322027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.487 [2024-07-15 16:08:15.331192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.488 [2024-07-15 16:08:15.331690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.488 [2024-07-15 16:08:15.331721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.488 [2024-07-15 16:08:15.331738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.488 [2024-07-15 16:08:15.332044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.488 [2024-07-15 16:08:15.332345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.488 [2024-07-15 16:08:15.332369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.488 [2024-07-15 16:08:15.332384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.488 [2024-07-15 16:08:15.336960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.488 [2024-07-15 16:08:15.346129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.488 [2024-07-15 16:08:15.346600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.488 [2024-07-15 16:08:15.346631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.488 [2024-07-15 16:08:15.346649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.488 [2024-07-15 16:08:15.346958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.488 [2024-07-15 16:08:15.347259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.488 [2024-07-15 16:08:15.347282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.488 [2024-07-15 16:08:15.347297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.488 [2024-07-15 16:08:15.351865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.488 [2024-07-15 16:08:15.361038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.488 [2024-07-15 16:08:15.361533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.488 [2024-07-15 16:08:15.361564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.488 [2024-07-15 16:08:15.361581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.488 [2024-07-15 16:08:15.361888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.488 [2024-07-15 16:08:15.362190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.488 [2024-07-15 16:08:15.362214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.488 [2024-07-15 16:08:15.362229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.488 [2024-07-15 16:08:15.366792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.488 [2024-07-15 16:08:15.375976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.488 [2024-07-15 16:08:15.376434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.488 [2024-07-15 16:08:15.376464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:48.488 [2024-07-15 16:08:15.376482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:48.488 [2024-07-15 16:08:15.376778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:48.488 [2024-07-15 16:08:15.377117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.488 [2024-07-15 16:08:15.377142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.488 [2024-07-15 16:08:15.377157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.488 [2024-07-15 16:08:15.381721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.488 [2024-07-15 16:08:15.390904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.488 [2024-07-15 16:08:15.391386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.488 [2024-07-15 16:08:15.391416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.488 [2024-07-15 16:08:15.391433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.488 [2024-07-15 16:08:15.391729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.488 [2024-07-15 16:08:15.392043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.488 [2024-07-15 16:08:15.392067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.488 [2024-07-15 16:08:15.392082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.488 [2024-07-15 16:08:15.396647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.488 [2024-07-15 16:08:15.405829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.488 [2024-07-15 16:08:15.406340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.488 [2024-07-15 16:08:15.406381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.488 [2024-07-15 16:08:15.406398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.488 [2024-07-15 16:08:15.406714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.488 [2024-07-15 16:08:15.407047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.488 [2024-07-15 16:08:15.407071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.488 [2024-07-15 16:08:15.407086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.488 [2024-07-15 16:08:15.411708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.747 [2024-07-15 16:08:15.420884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.747 [2024-07-15 16:08:15.421536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.747 [2024-07-15 16:08:15.421592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.747 [2024-07-15 16:08:15.421610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.747 [2024-07-15 16:08:15.421925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.747 [2024-07-15 16:08:15.422230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.747 [2024-07-15 16:08:15.422253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.747 [2024-07-15 16:08:15.422268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.747 [2024-07-15 16:08:15.426833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.747 [2024-07-15 16:08:15.436003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.747 [2024-07-15 16:08:15.436495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.747 [2024-07-15 16:08:15.436525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.747 [2024-07-15 16:08:15.436542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.747 [2024-07-15 16:08:15.436838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.747 [2024-07-15 16:08:15.437148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.747 [2024-07-15 16:08:15.437173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.747 [2024-07-15 16:08:15.437188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.747 [2024-07-15 16:08:15.441754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.747 [2024-07-15 16:08:15.450925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.451467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.451509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.451525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.451841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.452151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.452175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.452190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.456752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.465927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.466434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.466465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.466483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.466779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.467094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.467118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.467139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.471708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.480869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.481407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.481434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.481464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.481776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.482092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.482116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.482130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.486698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.495894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.496340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.496371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.496388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.496684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.496995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.497019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.497033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.501599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.511030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.511620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.511668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.511686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.511994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.512296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.512319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.512333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.516905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.526084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.526688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.526743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.526762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.527068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.527370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.527394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.527408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.531979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.541131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.541627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.541657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.541674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.541982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.542283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.542306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.542321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.546885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.556029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.556505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.556535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.556552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.556848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.557158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.557182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.557197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.561758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.570916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.571412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.571443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.571459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.571755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.572077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.572101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.572116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.576679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.585829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.586302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.586332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.586349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.586645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.586957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.586981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.586996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.591561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.748 [2024-07-15 16:08:15.600708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.748 [2024-07-15 16:08:15.601251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.748 [2024-07-15 16:08:15.601293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.748 [2024-07-15 16:08:15.601308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.748 [2024-07-15 16:08:15.601587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.748 [2024-07-15 16:08:15.601832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.748 [2024-07-15 16:08:15.601851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.748 [2024-07-15 16:08:15.601888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.748 [2024-07-15 16:08:15.606453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.749 [2024-07-15 16:08:15.615390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.749 [2024-07-15 16:08:15.615895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.749 [2024-07-15 16:08:15.615945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.749 [2024-07-15 16:08:15.615962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.749 [2024-07-15 16:08:15.616286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.749 [2024-07-15 16:08:15.616587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.749 [2024-07-15 16:08:15.616610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.749 [2024-07-15 16:08:15.616625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.749 [2024-07-15 16:08:15.621308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.749 [2024-07-15 16:08:15.630488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.749 [2024-07-15 16:08:15.630972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.749 [2024-07-15 16:08:15.631001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.749 [2024-07-15 16:08:15.631016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.749 [2024-07-15 16:08:15.631314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.749 [2024-07-15 16:08:15.631615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.749 [2024-07-15 16:08:15.631639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.749 [2024-07-15 16:08:15.631653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.749 [2024-07-15 16:08:15.636226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.749 [2024-07-15 16:08:15.645393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.749 [2024-07-15 16:08:15.645941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.749 [2024-07-15 16:08:15.645973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.749 [2024-07-15 16:08:15.645990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.749 [2024-07-15 16:08:15.646286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.749 [2024-07-15 16:08:15.646586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.749 [2024-07-15 16:08:15.646610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.749 [2024-07-15 16:08:15.646624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.749 [2024-07-15 16:08:15.651201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.749 [2024-07-15 16:08:15.660367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.749 [2024-07-15 16:08:15.660862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.749 [2024-07-15 16:08:15.660904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.749 [2024-07-15 16:08:15.660922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.749 [2024-07-15 16:08:15.661218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.749 [2024-07-15 16:08:15.661518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.749 [2024-07-15 16:08:15.661541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.749 [2024-07-15 16:08:15.661556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.749 [2024-07-15 16:08:15.666129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.749 [2024-07-15 16:08:15.675299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.749 [2024-07-15 16:08:15.675815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.749 [2024-07-15 16:08:15.675866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:48.749 [2024-07-15 16:08:15.675900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:48.749 [2024-07-15 16:08:15.676198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:48.749 [2024-07-15 16:08:15.676499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.749 [2024-07-15 16:08:15.676522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.749 [2024-07-15 16:08:15.676536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.008 [2024-07-15 16:08:15.681115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.008 [2024-07-15 16:08:15.690278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.008 [2024-07-15 16:08:15.690781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.008 [2024-07-15 16:08:15.690811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.008 [2024-07-15 16:08:15.690828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.008 [2024-07-15 16:08:15.691136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.008 [2024-07-15 16:08:15.691437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.008 [2024-07-15 16:08:15.691460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.008 [2024-07-15 16:08:15.691475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.008 [2024-07-15 16:08:15.696163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.008 [2024-07-15 16:08:15.705309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.008 [2024-07-15 16:08:15.705818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.008 [2024-07-15 16:08:15.705859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.008 [2024-07-15 16:08:15.705884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.008 [2024-07-15 16:08:15.706191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.008 [2024-07-15 16:08:15.706493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.008 [2024-07-15 16:08:15.706516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.008 [2024-07-15 16:08:15.706530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.008 [2024-07-15 16:08:15.711099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.008 [2024-07-15 16:08:15.720238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.008 [2024-07-15 16:08:15.720898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.008 [2024-07-15 16:08:15.720958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.008 [2024-07-15 16:08:15.720977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.008 [2024-07-15 16:08:15.721273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.008 [2024-07-15 16:08:15.721573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.008 [2024-07-15 16:08:15.721605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.008 [2024-07-15 16:08:15.721621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.008 [2024-07-15 16:08:15.726187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.008 [2024-07-15 16:08:15.735328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.008 [2024-07-15 16:08:15.735829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.008 [2024-07-15 16:08:15.735859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.008 [2024-07-15 16:08:15.735885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.008 [2024-07-15 16:08:15.736183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.008 [2024-07-15 16:08:15.736484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.008 [2024-07-15 16:08:15.736507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.008 [2024-07-15 16:08:15.736521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.008 [2024-07-15 16:08:15.741087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.008 [2024-07-15 16:08:15.750229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.008 [2024-07-15 16:08:15.750807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.008 [2024-07-15 16:08:15.750834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.008 [2024-07-15 16:08:15.750849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.751178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.751478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.751501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.751516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.756079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.765220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.765722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.765752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.765769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.766076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.766377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.766400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.766414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.770979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.780128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.780604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.780635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.780652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.780959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.781259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.781283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.781298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.785852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.795002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.795453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.795484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.795501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.795797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.796108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.796132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.796148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.800702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.810114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.810592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.810622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.810639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.810946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.811247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.811270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.811285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.815838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.825294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.825794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.825824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.825841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.826152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.826454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.826477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.826492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.831057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.840208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.840714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.840745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.840762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.841067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.841368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.841391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.841406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.845969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.855112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.855631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.855672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.855688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.856012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.856313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.856336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.856351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.860913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.870050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.870552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.870592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.870608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.870911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.871212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.871235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.871255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.875813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.884968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.885473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.885503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.885520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.885815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.886126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.886150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.886165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.890753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.899904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.900416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.900443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.900473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.009 [2024-07-15 16:08:15.900788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.009 [2024-07-15 16:08:15.901099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.009 [2024-07-15 16:08:15.901123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.009 [2024-07-15 16:08:15.901137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.009 [2024-07-15 16:08:15.905695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.009 [2024-07-15 16:08:15.914835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.009 [2024-07-15 16:08:15.915342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.009 [2024-07-15 16:08:15.915383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.009 [2024-07-15 16:08:15.915399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.010 [2024-07-15 16:08:15.915698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.010 [2024-07-15 16:08:15.916009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.010 [2024-07-15 16:08:15.916033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.010 [2024-07-15 16:08:15.916048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.010 [2024-07-15 16:08:15.920603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.010 [2024-07-15 16:08:15.929742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.010 [2024-07-15 16:08:15.930264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.010 [2024-07-15 16:08:15.930300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.010 [2024-07-15 16:08:15.930318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.010 [2024-07-15 16:08:15.930614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.010 [2024-07-15 16:08:15.930925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.010 [2024-07-15 16:08:15.930949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.010 [2024-07-15 16:08:15.930964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.010 [2024-07-15 16:08:15.935519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.268 [2024-07-15 16:08:15.944665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:15.945171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:15.945202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:15.945219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:15.945515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:15.945815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:15.945838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:15.945853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:15.950416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:15.959562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:15.960067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:15.960097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:15.960114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:15.960410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:15.960711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:15.960733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:15.960748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:15.965317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:15.974472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:15.974946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:15.974978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:15.974995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:15.975291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:15.975597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:15.975621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:15.975636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:15.980201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:15.989610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:15.990124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:15.990150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:15.990182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:15.990497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:15.990798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:15.990820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:15.990835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:15.995401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.004548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.005031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.005072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.005087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.005395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.005696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.005719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.005734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.010298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.019439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.020000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.020031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.020048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.020345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.020644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.020667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.020682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.025250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.034448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.034949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.034981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.034998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.035294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.035595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.035618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.035633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.040200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.049343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.049845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.049893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.049912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.050212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.050513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.050536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.050551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.055118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.064262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.064773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.064799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.064814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.065124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.065425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.065448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.065463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.070029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.079170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.079774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.079825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.079848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.080155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.080455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.080479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.080493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.269 [2024-07-15 16:08:16.085196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.269 [2024-07-15 16:08:16.094073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.269 [2024-07-15 16:08:16.094556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.269 [2024-07-15 16:08:16.094597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.269 [2024-07-15 16:08:16.094612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.269 [2024-07-15 16:08:16.094928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.269 [2024-07-15 16:08:16.095230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.269 [2024-07-15 16:08:16.095253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.269 [2024-07-15 16:08:16.095268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.270 [2024-07-15 16:08:16.099825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.270 [2024-07-15 16:08:16.108974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.270 [2024-07-15 16:08:16.109481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.270 [2024-07-15 16:08:16.109522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.270 [2024-07-15 16:08:16.109539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.270 [2024-07-15 16:08:16.109848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.270 [2024-07-15 16:08:16.110158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.270 [2024-07-15 16:08:16.110182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.270 [2024-07-15 16:08:16.110197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.270 [2024-07-15 16:08:16.114751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.270 [2024-07-15 16:08:16.123900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.270 [2024-07-15 16:08:16.124395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.270 [2024-07-15 16:08:16.124425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.270 [2024-07-15 16:08:16.124442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.270 [2024-07-15 16:08:16.124737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.270 [2024-07-15 16:08:16.125048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.270 [2024-07-15 16:08:16.125078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.270 [2024-07-15 16:08:16.125094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.270 [2024-07-15 16:08:16.129647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.270 [2024-07-15 16:08:16.138788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.270 [2024-07-15 16:08:16.139307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.270 [2024-07-15 16:08:16.139339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.270 [2024-07-15 16:08:16.139357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.270 [2024-07-15 16:08:16.139653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.270 [2024-07-15 16:08:16.139965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.270 [2024-07-15 16:08:16.139989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.270 [2024-07-15 16:08:16.140004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.270 [2024-07-15 16:08:16.144561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.270 [2024-07-15 16:08:16.153704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.270 [2024-07-15 16:08:16.154221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.270 [2024-07-15 16:08:16.154252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:49.270 [2024-07-15 16:08:16.154269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:49.270 [2024-07-15 16:08:16.154565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:49.270 [2024-07-15 16:08:16.154865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.270 [2024-07-15 16:08:16.154899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.270 [2024-07-15 16:08:16.154915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.270 [2024-07-15 16:08:16.159470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.270 [2024-07-15 16:08:16.168611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.270 [2024-07-15 16:08:16.169090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.270 [2024-07-15 16:08:16.169121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.270 [2024-07-15 16:08:16.169139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.270 [2024-07-15 16:08:16.169434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.270 [2024-07-15 16:08:16.169734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.270 [2024-07-15 16:08:16.169757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.270 [2024-07-15 16:08:16.169771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.270 [2024-07-15 16:08:16.174337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.270 [2024-07-15 16:08:16.183491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.270 [2024-07-15 16:08:16.183990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.270 [2024-07-15 16:08:16.184021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.270 [2024-07-15 16:08:16.184038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.270 [2024-07-15 16:08:16.184333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.270 [2024-07-15 16:08:16.184633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.270 [2024-07-15 16:08:16.184656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.270 [2024-07-15 16:08:16.184671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.270 [2024-07-15 16:08:16.189235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.270 [2024-07-15 16:08:16.198402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.270 [2024-07-15 16:08:16.198913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.270 [2024-07-15 16:08:16.198940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.270 [2024-07-15 16:08:16.198955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.199273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.199586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.199610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.199625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.204193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.555 [2024-07-15 16:08:16.213351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.213819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.213849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.213867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.214173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.214473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.214496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.214511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.219075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.555 [2024-07-15 16:08:16.228484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.228955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.228985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.229002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.229304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.229604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.229627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.229641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.234206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.555 [2024-07-15 16:08:16.243668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.244184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.244226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.244241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.244568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.244868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.244902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.244918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.249476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.555 [2024-07-15 16:08:16.258616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.259116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.259142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.259156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.259450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.259750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.259773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.259788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.264355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.555 [2024-07-15 16:08:16.273511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.274000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.274031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.274049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.274345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.274645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.274668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.274689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.279259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.555 [2024-07-15 16:08:16.288402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.288908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.288939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.288956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.289253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.289553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.289576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.289591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.294165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.555 [2024-07-15 16:08:16.303306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.303803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.303834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.303851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.304156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.304457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.304480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.304495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.309059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.555 [2024-07-15 16:08:16.318197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.318700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.318725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.318755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.319078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.319380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.319403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.319418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.323984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.555 [2024-07-15 16:08:16.333123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.333620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.555 [2024-07-15 16:08:16.333655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.555 [2024-07-15 16:08:16.333673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.555 [2024-07-15 16:08:16.333981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.555 [2024-07-15 16:08:16.334282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.555 [2024-07-15 16:08:16.334305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.555 [2024-07-15 16:08:16.334320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.555 [2024-07-15 16:08:16.338897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.555 [2024-07-15 16:08:16.348050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.555 [2024-07-15 16:08:16.348544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.348574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.348592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.348897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.349198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.349221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.349236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.353792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.556 [2024-07-15 16:08:16.362967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.363469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.363517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.363534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.363830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.364139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.364163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.364177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.368756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.556 [2024-07-15 16:08:16.378014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.378529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.378559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.378576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.378871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.379191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.379212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.379241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.383792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.556 [2024-07-15 16:08:16.393002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.393477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.393504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.393519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.393834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.394138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.394174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.394190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.398804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.556 [2024-07-15 16:08:16.407968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.408482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.408531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.408549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.408844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.409153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.409178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.409193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.413754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.556 [2024-07-15 16:08:16.422917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.423421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.423452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.423469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.423764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.424075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.424099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.424114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.428680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.556 [2024-07-15 16:08:16.437923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.438457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.438484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.438499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.438798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.439108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.439132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.439147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.443674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.556 [2024-07-15 16:08:16.452985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.454185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.454214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.454249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.454547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.454849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.454872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.454896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.459500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.556 [2024-07-15 16:08:16.467900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.468408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.468434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.468449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.468742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.469054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.469079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.469093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.556 [2024-07-15 16:08:16.473648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.556 [2024-07-15 16:08:16.482816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.556 [2024-07-15 16:08:16.483351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.556 [2024-07-15 16:08:16.483400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.556 [2024-07-15 16:08:16.483422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.556 [2024-07-15 16:08:16.483719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.556 [2024-07-15 16:08:16.484031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.556 [2024-07-15 16:08:16.484055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.556 [2024-07-15 16:08:16.484069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.488627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.815 [2024-07-15 16:08:16.497750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.815 [2024-07-15 16:08:16.498225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.815 [2024-07-15 16:08:16.498256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.815 [2024-07-15 16:08:16.498274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.815 [2024-07-15 16:08:16.498569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.815 [2024-07-15 16:08:16.498869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.815 [2024-07-15 16:08:16.498902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.815 [2024-07-15 16:08:16.498919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.503481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.815 [2024-07-15 16:08:16.512896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.815 [2024-07-15 16:08:16.513441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.815 [2024-07-15 16:08:16.513468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.815 [2024-07-15 16:08:16.513483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.815 [2024-07-15 16:08:16.513792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.815 [2024-07-15 16:08:16.514102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.815 [2024-07-15 16:08:16.514126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.815 [2024-07-15 16:08:16.514141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.518703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.815 [2024-07-15 16:08:16.527868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.815 [2024-07-15 16:08:16.528410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.815 [2024-07-15 16:08:16.528458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.815 [2024-07-15 16:08:16.528476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.815 [2024-07-15 16:08:16.528772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.815 [2024-07-15 16:08:16.529107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.815 [2024-07-15 16:08:16.529137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.815 [2024-07-15 16:08:16.529154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.533709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.815 [2024-07-15 16:08:16.542865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.815 [2024-07-15 16:08:16.543432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.815 [2024-07-15 16:08:16.543481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.815 [2024-07-15 16:08:16.543498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.815 [2024-07-15 16:08:16.543794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.815 [2024-07-15 16:08:16.544104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.815 [2024-07-15 16:08:16.544128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.815 [2024-07-15 16:08:16.544143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.548701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.815 [2024-07-15 16:08:16.557863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.815 [2024-07-15 16:08:16.558382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.815 [2024-07-15 16:08:16.558413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.815 [2024-07-15 16:08:16.558430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.815 [2024-07-15 16:08:16.558725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.815 [2024-07-15 16:08:16.559036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.815 [2024-07-15 16:08:16.559060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.815 [2024-07-15 16:08:16.559075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.815 [2024-07-15 16:08:16.563629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.816 [2024-07-15 16:08:16.572795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.816 [2024-07-15 16:08:16.573317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.816 [2024-07-15 16:08:16.573348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.816 [2024-07-15 16:08:16.573366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.816 [2024-07-15 16:08:16.573662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.816 [2024-07-15 16:08:16.573972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.816 [2024-07-15 16:08:16.573996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.816 [2024-07-15 16:08:16.574011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.816 [2024-07-15 16:08:16.578566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.816 [2024-07-15 16:08:16.587717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.816 [2024-07-15 16:08:16.588265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.816 [2024-07-15 16:08:16.588314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.816 [2024-07-15 16:08:16.588331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.816 [2024-07-15 16:08:16.588627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.816 [2024-07-15 16:08:16.588936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.816 [2024-07-15 16:08:16.588960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.816 [2024-07-15 16:08:16.588975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.816 [2024-07-15 16:08:16.593543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.816 [2024-07-15 16:08:16.602690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.816 [2024-07-15 16:08:16.603192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.816 [2024-07-15 16:08:16.603241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.816 [2024-07-15 16:08:16.603259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.816 [2024-07-15 16:08:16.603555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.816 [2024-07-15 16:08:16.603855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.816 [2024-07-15 16:08:16.603885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.816 [2024-07-15 16:08:16.603904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.816 [2024-07-15 16:08:16.608466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.816 [2024-07-15 16:08:16.617617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.816 [2024-07-15 16:08:16.618094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.816 [2024-07-15 16:08:16.618125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.816 [2024-07-15 16:08:16.618142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.816 [2024-07-15 16:08:16.618437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.816 [2024-07-15 16:08:16.618739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.816 [2024-07-15 16:08:16.618762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.816 [2024-07-15 16:08:16.618776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.816 [2024-07-15 16:08:16.623372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.816 [2024-07-15 16:08:16.632537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.816 [2024-07-15 16:08:16.633056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.816 [2024-07-15 16:08:16.633087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:49.816 [2024-07-15 16:08:16.633105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:49.816 [2024-07-15 16:08:16.633407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:49.816 [2024-07-15 16:08:16.633707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.816 [2024-07-15 16:08:16.633730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.816 [2024-07-15 16:08:16.633745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.816 [2024-07-15 16:08:16.638317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
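The errno = 111 driving every cycle above is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target is down (the bdevperf.sh record below shows the target app was killed). A minimal, hypothetical shell repro under that assumption; the address and port come from this log, and this snippet is not part of the test itself:

    # Refused connect: no listener on the NVMe-oF TCP port while nvmf_tgt is dead.
    if ! timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 failed: ECONNREFUSED (errno 111 on Linux)"
    fi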
00:26:49.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1258463 Killed "${NVMF_APP[@]}" "$@"
16:08:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
16:08:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
16:08:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
16:08:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1259552
16:08:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
16:08:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1259552
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1259552 ']'
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
16:08:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 16:08:16.647439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 16:08:16.648005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 16:08:16.648033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
[2024-07-15 16:08:16.648048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
[2024-07-15 16:08:16.648340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
[2024-07-15 16:08:16.648601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-15 16:08:16.648620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-15 16:08:16.648632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 16:08:16.652573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
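The traced waitforlisten 1259552 step blocks until the freshly started nvmf_tgt (pid 1259552) answers on its RPC socket, retrying up to max_retries=100 times. A rough sketch of that polling pattern, assuming scripts/rpc.py and the rpc_get_methods RPC rather than the exact autotest_common.sh implementation:

    rpc_sock=/var/tmp/spdk.sock
    pid=1259552                      # nvmfpid from the trace above
    for _ in $(seq 1 100); do        # mirrors max_retries=100
        # Give up early if the target died instead of starting.
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        # Any successful RPC round-trip means the socket is up.
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt is up and listening on $rpc_sock"
            break
        fi
        sleep 0.1
    done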
[... two more reconnect cycles (reset attempts at 16:08:16.661722 and 16:08:16.676117) fail with the same errno = 111 sequence ...]
00:26:49.816 [2024-07-15 16:08:16.689280] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:26:49.816 [2024-07-15 16:08:16.689352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... two more reconnect cycles (reset attempts at 16:08:16.690352 and 16:08:16.704434) fail with the same errno = 111 sequence ...]
[... a reconnect cycle (reset attempt at 16:08:16.718562) fails with the same errno = 111 sequence ...]
00:26:49.817 EAL: No free 2048 kB hugepages reported on node 1
[... a reconnect cycle (reset attempt at 16:08:16.733659) fails with the same errno = 111 sequence ...]
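The EAL notice above is informational: NUMA node 1 simply has no free 2 MiB hugepages, and startup proceeds as long as another node can satisfy the allocation. The per-node pools can be inspected through standard Linux sysfs paths (generic paths, not taken from this run):

    # Free 2 MiB hugepages per NUMA node; a 0 for node1 matches the EAL notice.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages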
00:26:50.078 [2024-07-15 16:08:16.748827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.749374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.749401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.749417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.749731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.750046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.750069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.750098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.754638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.760969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:50.078 [2024-07-15 16:08:16.763656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.764172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.764200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.764216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.764529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.764831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.764854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.764869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.769395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.778672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.779375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.779415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.779435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.779765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.780080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.780105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.780124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.784621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.793616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.794160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.794189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.794205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.794507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.794808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.794831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.794847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.799346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.808564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.809078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.809117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.809135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.809445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.809746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.809770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.809784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.814277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.823519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.824141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.824175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.824192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.824515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.824821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.824844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.824862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.829375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.838188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.838827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.838870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.838897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.839196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.839507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.839531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.839549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.844052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.853147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.853713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.853741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.853757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.854076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.854398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.854422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.854437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.858992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.868189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.868707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.868735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.868751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.078 [2024-07-15 16:08:16.869062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.078 [2024-07-15 16:08:16.869366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.078 [2024-07-15 16:08:16.869391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.078 [2024-07-15 16:08:16.869406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.078 [2024-07-15 16:08:16.873986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.078 [2024-07-15 16:08:16.878560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:50.078 [2024-07-15 16:08:16.878595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:50.078 [2024-07-15 16:08:16.878611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:50.078 [2024-07-15 16:08:16.878623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:50.078 [2024-07-15 16:08:16.878635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:50.078 [2024-07-15 16:08:16.878718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:50.078 [2024-07-15 16:08:16.878912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:50.078 [2024-07-15 16:08:16.878933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:50.078 [2024-07-15 16:08:16.882691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.078 [2024-07-15 16:08:16.883210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.078 [2024-07-15 16:08:16.883241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.078 [2024-07-15 16:08:16.883258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.883542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.883807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.883828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.883843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.887908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.897317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.898009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.898055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.898075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.898368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.898660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.898683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.898700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.902643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.911774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.912494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.912537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.912558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.912847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.913150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.913187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.913214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.917297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
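The app_setup_trace notices above are the target explaining how to pull its own trace data: it was started with tracepoint group mask 0xFFFF, a live snapshot can be taken with the exact command the log prints, and the backing file in /dev/shm can be copied out for offline decoding. A minimal capture session following the log's own hints (the /tmp destination is an arbitrary choice for illustration):

  spdk_trace -s nvmf -i 0            # snapshot events from the running nvmf app (shm id 0), as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/     # preserve the raw trace file for offline analysis/debug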
00:26:50.079 [2024-07-15 16:08:16.926434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.927089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.927133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.927154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.927445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.927713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.927734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.927751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.931795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.940936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.941663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.941702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.941721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.942018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.942298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.942320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.942337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.946418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.955375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.956028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.956073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.956094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.956385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.956654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.956676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.956693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.960864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.970026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.970547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.970577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.970594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.970897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.971185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.971205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.971220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.975294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.984504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.985019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.985047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.985063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.985344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:16.985606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:16.985627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:16.985640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:16.989670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.079 [2024-07-15 16:08:16.999035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.079 [2024-07-15 16:08:16.999494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.079 [2024-07-15 16:08:16.999521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.079 [2024-07-15 16:08:16.999537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.079 [2024-07-15 16:08:16.999817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.079 [2024-07-15 16:08:17.000111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.079 [2024-07-15 16:08:17.000133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.079 [2024-07-15 16:08:17.000146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.079 [2024-07-15 16:08:17.004359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.339 [2024-07-15 16:08:17.013803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.339 [2024-07-15 16:08:17.014245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.339 [2024-07-15 16:08:17.014274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.339 [2024-07-15 16:08:17.014290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.339 [2024-07-15 16:08:17.014567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.339 [2024-07-15 16:08:17.014829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.339 [2024-07-15 16:08:17.014850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.339 [2024-07-15 16:08:17.014863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.339 [2024-07-15 16:08:17.019009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.339 [2024-07-15 16:08:17.028333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.339 [2024-07-15 16:08:17.028745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.339 [2024-07-15 16:08:17.028772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.339 [2024-07-15 16:08:17.028787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.339 [2024-07-15 16:08:17.029062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.339 [2024-07-15 16:08:17.029343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.339 [2024-07-15 16:08:17.029364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.339 [2024-07-15 16:08:17.029377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.339 [2024-07-15 16:08:17.033450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.339 [2024-07-15 16:08:17.042780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.339 [2024-07-15 16:08:17.043238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.339 [2024-07-15 16:08:17.043266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.339 [2024-07-15 16:08:17.043286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.339 [2024-07-15 16:08:17.043567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.043829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.043849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.043886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.047948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.057269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.057745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.057773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.057788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.058061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.058342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.058363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.058376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.062449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.071777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.072238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.072265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.072280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.072559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.072820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.072841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.072869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.077082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.086199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.086648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.086677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.086692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.086987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.087252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.087277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.087291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.091372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.100743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.101213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.101241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.101257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.101536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.101797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.101817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.101830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.106050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.115145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.115607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.115635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.115650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.115941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.116203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.116224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.116236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.120306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.129637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.130109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.130137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.130152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.130432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.130694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.130714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.130727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.134760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.144048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.144470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.144498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.144513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.144793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.145064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.145085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.145099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.149178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.158516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.158959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.158988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.159003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.159284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.159547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.159567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.159579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.163578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.172906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.173357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.173384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.173400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.173680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.173970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.173992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.174006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.178100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.187476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.187944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.187972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.187988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.188275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.188538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.188558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.188571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.192618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.201979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.202403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.202430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.202445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.202725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.202996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.203017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.203030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.207080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.216456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.216968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.216996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.217011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.217292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.217577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.217598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.217612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.221778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.230867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.231303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.231330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.231345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.231624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.231911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.231933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.231951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.236004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.245337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.245802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.245829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.245845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.246120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.246399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.246420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.246433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.250538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.340 [2024-07-15 16:08:17.259835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.340 [2024-07-15 16:08:17.260362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-07-15 16:08:17.260390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-07-15 16:08:17.260405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.340 [2024-07-15 16:08:17.260686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.340 [2024-07-15 16:08:17.260956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.340 [2024-07-15 16:08:17.260977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.340 [2024-07-15 16:08:17.260990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.340 [2024-07-15 16:08:17.265127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.599 [2024-07-15 16:08:17.274557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.599 [2024-07-15 16:08:17.275003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.599 [2024-07-15 16:08:17.275032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.599 [2024-07-15 16:08:17.275047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.599 [2024-07-15 16:08:17.275313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.599 [2024-07-15 16:08:17.275595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.599 [2024-07-15 16:08:17.275615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.275629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.279673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.289113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.600 [2024-07-15 16:08:17.289536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-07-15 16:08:17.289569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-07-15 16:08:17.289585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.600 [2024-07-15 16:08:17.289863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.600 [2024-07-15 16:08:17.290141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.600 [2024-07-15 16:08:17.290161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.290174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.294245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.303612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.600 [2024-07-15 16:08:17.304068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-07-15 16:08:17.304097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-07-15 16:08:17.304112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.600 [2024-07-15 16:08:17.304393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.600 [2024-07-15 16:08:17.304656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.600 [2024-07-15 16:08:17.304676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.304689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.308722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.318054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.600 [2024-07-15 16:08:17.318486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-07-15 16:08:17.318514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-07-15 16:08:17.318529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.600 [2024-07-15 16:08:17.318808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.600 [2024-07-15 16:08:17.319079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.600 [2024-07-15 16:08:17.319100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.319112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.323168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.332539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.600 [2024-07-15 16:08:17.332989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-07-15 16:08:17.333017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-07-15 16:08:17.333032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.600 [2024-07-15 16:08:17.333311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.600 [2024-07-15 16:08:17.333578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.600 [2024-07-15 16:08:17.333598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.333612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.337648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.347006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.600 [2024-07-15 16:08:17.347424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-07-15 16:08:17.347451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-07-15 16:08:17.347467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set
00:26:50.600 [2024-07-15 16:08:17.347747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor
00:26:50.600 [2024-07-15 16:08:17.348018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.600 [2024-07-15 16:08:17.348038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.600 [2024-07-15 16:08:17.348051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.600 [2024-07-15 16:08:17.352113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.600 [2024-07-15 16:08:17.361479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.600 [2024-07-15 16:08:17.361925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.600 [2024-07-15 16:08:17.361953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.600 [2024-07-15 16:08:17.361969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.600 [2024-07-15 16:08:17.362249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.600 [2024-07-15 16:08:17.362512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.600 [2024-07-15 16:08:17.362532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.600 [2024-07-15 16:08:17.362545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.600 [2024-07-15 16:08:17.366582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.600 [2024-07-15 16:08:17.375900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.600 [2024-07-15 16:08:17.376304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.600 [2024-07-15 16:08:17.376332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.600 [2024-07-15 16:08:17.376347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.600 [2024-07-15 16:08:17.376626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.600 [2024-07-15 16:08:17.376896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.600 [2024-07-15 16:08:17.376917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.600 [2024-07-15 16:08:17.376930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.600 [2024-07-15 16:08:17.380982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.600 [2024-07-15 16:08:17.390366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.600 [2024-07-15 16:08:17.390800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.600 [2024-07-15 16:08:17.390827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.600 [2024-07-15 16:08:17.390842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.600 [2024-07-15 16:08:17.391116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.600 [2024-07-15 16:08:17.391395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.600 [2024-07-15 16:08:17.391417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.600 [2024-07-15 16:08:17.391429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.600 [2024-07-15 16:08:17.395504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.600 [2024-07-15 16:08:17.404842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.600 [2024-07-15 16:08:17.405318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.600 [2024-07-15 16:08:17.405346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.600 [2024-07-15 16:08:17.405361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.600 [2024-07-15 16:08:17.405640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.600 [2024-07-15 16:08:17.405929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.600 [2024-07-15 16:08:17.405951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.600 [2024-07-15 16:08:17.405965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.600 [2024-07-15 16:08:17.410019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.600 [2024-07-15 16:08:17.419364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.600 [2024-07-15 16:08:17.419811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.600 [2024-07-15 16:08:17.419839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.600 [2024-07-15 16:08:17.419854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.600 [2024-07-15 16:08:17.420130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.600 [2024-07-15 16:08:17.420410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.600 [2024-07-15 16:08:17.420431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.420444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.424519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.601 [2024-07-15 16:08:17.433827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.434273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.434301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.434321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.434603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.434864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.434892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.434907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.438958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.601 [2024-07-15 16:08:17.448282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.448716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.448744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.448759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.449034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.449316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.449351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.449366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.453464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.601 [2024-07-15 16:08:17.462795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.463261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.463289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.463305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.463571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.463841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.463862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.463884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.468166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.601 [2024-07-15 16:08:17.477202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.477640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.477668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.477684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.477959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.478244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.478269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.478283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.482356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.601 [2024-07-15 16:08:17.491845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.492317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.492346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.492361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.492642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.492915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.492936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.492949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.497008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.601 [2024-07-15 16:08:17.506337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.506780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.506809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.506824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.507099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.507379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.507400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.507413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.511487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.601 [2024-07-15 16:08:17.520829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.601 [2024-07-15 16:08:17.521288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.601 [2024-07-15 16:08:17.521316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.601 [2024-07-15 16:08:17.521332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.601 [2024-07-15 16:08:17.521612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.601 [2024-07-15 16:08:17.521883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.601 [2024-07-15 16:08:17.521905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.601 [2024-07-15 16:08:17.521918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.601 [2024-07-15 16:08:17.526017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.860 [2024-07-15 16:08:17.535532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.535988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.536016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.536031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.536312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.536574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.536595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.536608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.540651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.860 [2024-07-15 16:08:17.550053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.550505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.550532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.550548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.550829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.551123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.551145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.551159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.555250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.860 [2024-07-15 16:08:17.564602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.565048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.565076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.565091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.565371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.565634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.565654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.565668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.569706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.860 [2024-07-15 16:08:17.579038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.579461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.579490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.579506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.579790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.580085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.580108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.580122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.584199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.860 [2024-07-15 16:08:17.593553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.593997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.594025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.594041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.594320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.594583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.594603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.594617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.598610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.860 [2024-07-15 16:08:17.607995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.608445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.608472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.608487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.608768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.609062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.609084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.609097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.613162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.860 [2024-07-15 16:08:17.622525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.622964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.622992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.623008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.623289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.623551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.623571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.623588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.627630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.860 [2024-07-15 16:08:17.636988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.637507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.637534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.637551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.637831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.638126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.638149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.638162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.642258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.860 [2024-07-15 16:08:17.651431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.651860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.651900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.651917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.652197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.652459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.652480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.652494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.656556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.860 [2024-07-15 16:08:17.665890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.860 [2024-07-15 16:08:17.666350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.860 [2024-07-15 16:08:17.666378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.860 [2024-07-15 16:08:17.666393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.860 [2024-07-15 16:08:17.666672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.860 [2024-07-15 16:08:17.666944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.860 [2024-07-15 16:08:17.666964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.860 [2024-07-15 16:08:17.666977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.860 [2024-07-15 16:08:17.671050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.861 [2024-07-15 16:08:17.680402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.680955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.680988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.681004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.681280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.681542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.681562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.681574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.685666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 [2024-07-15 16:08:17.694755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.695184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.695212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.695227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.695508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.695769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.695790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.695803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.861 [2024-07-15 16:08:17.699961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
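The xtrace fragments woven between the failures above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) are the target start-up wait concluding on the other side: the countdown never reached zero, so the nvmf_tgt RPC socket came up in time. A loop of that shape, with the socket path and the rpc_get_methods probe being assumptions rather than the suite's exact code:

    # Poll the target's RPC socket until it answers; fail if the countdown expires.
    i=20
    while (( i > 0 )); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
        (( i-- ))
    done
    (( i == 0 )) && { echo 'nvmf_tgt never came up' >&2; exit 1; }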
00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.704784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.861 [2024-07-15 16:08:17.709436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.709888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.709917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.709932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.710199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.861 [2024-07-15 16:08:17.710479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.710504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.710518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.714645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 [2024-07-15 16:08:17.723803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.724271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.724299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.724314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.724608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.724888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.724910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.724924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:50.861 [2024-07-15 16:08:17.728997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 [2024-07-15 16:08:17.738346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.738951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.738988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.739008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.739295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.739563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.739584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.739601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.861 [2024-07-15 16:08:17.743664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 Malloc0 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.752975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.753573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.753605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.753622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.753936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.754233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.754255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.754269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.758459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.861 [2024-07-15 16:08:17.767566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.861 [2024-07-15 16:08:17.768027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.861 [2024-07-15 16:08:17.768056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6ac0 with addr=10.0.0.2, port=4420 00:26:50.861 [2024-07-15 16:08:17.768071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6ac0 is same with the state(5) to be set 00:26:50.861 [2024-07-15 16:08:17.768352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6ac0 (9): Bad file descriptor 00:26:50.861 [2024-07-15 16:08:17.768615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.861 [2024-07-15 16:08:17.768635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.861 [2024-07-15 16:08:17.768648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.861 [2024-07-15 16:08:17.769344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.861 [2024-07-15 16:08:17.772888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.861 16:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1258795 00:26:50.861 [2024-07-15 16:08:17.782015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.157 [2024-07-15 16:08:17.911718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
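Pulling the rpc_cmd calls of host/bdevperf.sh@17 through @21 out of the reconnect noise, the target the host has been hammering is assembled in five RPCs, after which the listen notice appears and the reset attempt started at 16:08:17.782 finally lands (Resetting controller successful at 16:08:17.911). The equivalent stand-alone invocations, with rpc.py standing in for the suite's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The wait 1258795 that follows parks the script on the bdevperf process until its 15-second verify workload, summarised in the Latency table below, has run to completion.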
00:26:59.321 00:26:59.321 Latency(us) 00:26:59.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:59.321 Verification LBA range: start 0x0 length 0x4000 00:26:59.321 Nvme1n1 : 15.01 6273.07 24.50 8693.53 0.00 8524.09 1019.45 20486.07 00:26:59.321 =================================================================================================================== 00:26:59.321 Total : 6273.07 24.50 8693.53 0.00 8524.09 1019.45 20486.07 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.580 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.839 rmmod nvme_tcp 00:26:59.839 rmmod nvme_fabrics 00:26:59.839 rmmod nvme_keyring 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1259552 ']' 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1259552 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1259552 ']' 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1259552 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259552 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259552' 00:26:59.839 killing process with pid 1259552 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1259552 00:26:59.839 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1259552 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
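Teardown then runs in reverse: sync, drop the subsystem over RPC, unload the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), and kill the nvmf_tgt reactor, pid 1259552 in this run. Condensed from the xtrace above, with rpc.py again standing in for rpc_cmd:

    sync
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp         # also reported as rmmod nvme_tcp in the log
    modprobe -v -r nvme-fabrics
    kill 1259552 && wait 1259552    # nvmf_tgt pid, from killprocess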
00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.098 16:08:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.650 16:08:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.650 00:27:02.650 real 0m23.397s 00:27:02.650 user 1m3.856s 00:27:02.650 sys 0m4.167s 00:27:02.650 16:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:02.650 16:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.650 ************************************ 00:27:02.650 END TEST nvmf_bdevperf 00:27:02.650 ************************************ 00:27:02.650 16:08:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:02.650 16:08:28 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:02.650 16:08:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:02.650 16:08:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.650 16:08:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.650 ************************************ 00:27:02.650 START TEST nvmf_target_disconnect 00:27:02.650 ************************************ 00:27:02.650 16:08:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:02.650 * Looking for test storage... 
00:27:02.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.650 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.651 16:08:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:04.554 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:04.554 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.554 16:08:30 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:04.554 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.554 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:04.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.555 16:08:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:04.555 00:27:04.555 --- 10.0.0.2 ping statistics --- 00:27:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.555 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:27:04.555 00:27:04.555 --- 10.0.0.1 ping statistics --- 00:27:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.555 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:04.555 ************************************ 00:27:04.555 START TEST nvmf_target_disconnect_tc1 00:27:04.555 ************************************ 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:04.555 
16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.555 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.555 [2024-07-15 16:08:31.236101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.555 [2024-07-15 16:08:31.236173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e1a0 with addr=10.0.0.2, port=4420 00:27:04.555 [2024-07-15 16:08:31.236215] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:04.555 [2024-07-15 16:08:31.236242] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:04.555 [2024-07-15 16:08:31.236257] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:04.555 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:04.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:04.555 Initializing NVMe Controllers 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:04.555 00:27:04.555 real 0m0.098s 00:27:04.555 user 0m0.044s 00:27:04.555 sys 
0m0.053s 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:04.555 ************************************ 00:27:04.555 END TEST nvmf_target_disconnect_tc1 00:27:04.555 ************************************ 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:04.555 ************************************ 00:27:04.555 START TEST nvmf_target_disconnect_tc2 00:27:04.555 ************************************ 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1262655 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1262655 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1262655 ']' 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
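Before tc2's freshly launched nvmf_tgt (pid 1262655, started with ip netns exec cvl_0_0_ns_spdk and core mask 0xF0, with waitforlisten polling /var/tmp/spdk.sock up to max_retries=100) comes up, it is worth restating the fixture the nvmf/common.sh@229-@268 entries at 16:08:30-31 built: the E810 port cvl_0_0 (10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace to host the target, while its sibling cvl_0_1 (10.0.0.1) stayed in the root namespace for the initiator, so NVMe/TCP traffic actually crosses the NIC instead of short-circuiting through loopback. A condensed replay of those commands, exactly as they appear in the log, with the xtrace noise stripped:

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk   # names from the log

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> initiator side

Because NVMF_APP was prefixed with the namespace command at nvmf/common.sh@270, the target started here runs inside that namespace and its port 4420 listener is reachable only across the link.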
00:27:04.555 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.556 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.556 [2024-07-15 16:08:31.343865] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:04.556 [2024-07-15 16:08:31.343976] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.556 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.556 [2024-07-15 16:08:31.408593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.813 [2024-07-15 16:08:31.519074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.813 [2024-07-15 16:08:31.519129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.813 [2024-07-15 16:08:31.519159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.814 [2024-07-15 16:08:31.519177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.814 [2024-07-15 16:08:31.519187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.814 [2024-07-15 16:08:31.519313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:04.814 [2024-07-15 16:08:31.519378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:04.814 [2024-07-15 16:08:31.519447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:04.814 [2024-07-15 16:08:31.519450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.814 Malloc0 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:04.814 16:08:31 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.814 [2024-07-15 16:08:31.683716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.814 [2024-07-15 16:08:31.711997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1262734
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:04.814 16:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:05.073 EAL: No free 2048 kB hugepages reported on node 1
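The whole tc2 bring-up above went through rpc_cmd, the autotest helper that forwards each call to SPDK's scripts/rpc.py against /var/tmp/spdk.sock. Re-stated without the xtrace noise (the one-line wrapper below is a simplified stand-in, not the real helper; the RPC names and argument lists are verbatim from the log):

    # Simplified stand-in for the autotest rpc_cmd helper; assumes
    # $rootdir points at the SPDK checkout.
    rpc_cmd() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The workload then pointed at that listener (pid 1262734 here):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

Note that nvmf_create_transport carries the '-t tcp -o' assembled into NVMF_TRANSPORT_OPTS earlier, and that the listener address is the namespaced 10.0.0.2 side of the fixture.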
00:27:06.988 16:08:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1262655
00:27:06.988 16:08:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:06.988 Read completed with error (sct=0, sc=8)
00:27:06.988 starting I/O failed
[... the same two lines, with Read or Write as appropriate, repeat for every outstanding command on each of the reconnect app's four qpairs; the bursts are condensed here and only the per-qpair transport errors that close each burst are kept ...]
[2024-07-15 16:08:33.736670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[2024-07-15 16:08:33.736999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-07-15 16:08:33.737313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-07-15 16:08:33.737625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
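The reconnect example exists to exercise exactly this: once all four qpairs report CQ transport errors it tears the controller down and keeps re-probing 10.0.0.2:4420 for the rest of its -t 10 run. A rough shell analogy of the retry loop that produces the burst below (illustrative only; the example's real loop is C code around spdk_nvme_probe, and the retry count and delay here are arbitrary):

    # Probe the listener repeatedly; each refused connect() corresponds
    # to one "connect() failed, errno = 111" line in the log.
    for i in {1..50}; do
        timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null && break
        sleep 0.2
    done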
[2024-07-15 16:08:33.737833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 16:08:33.737873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:06.989 qpair failed and we were unable to recover it.
[... this three-line sequence repeats for every subsequent reconnect attempt, with timestamps running through at least 16:08:33.754942 and tqpair handles 0x10e6200, 0x7f1708000b90 and 0x7f1700000b90; each attempt is refused by 10.0.0.2 port 4420 and the qpair is abandoned; condensed here ...]
00:27:06.991 [2024-07-15 16:08:33.755101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.755126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.755297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.755322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.755458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.755484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.755636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.755661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.755849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.755882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.756057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.756082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.756279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.756307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.756486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.756578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.756789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.756815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.756955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.756980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 
00:27:06.991 [2024-07-15 16:08:33.757140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.757165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.757354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.757388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.757566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.757594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.757767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.757812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.757975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.758001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.758133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.758174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.758356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.758381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.758531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.758556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.991 [2024-07-15 16:08:33.758768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.991 [2024-07-15 16:08:33.758796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.991 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.758943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.758969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 
00:27:06.992 [2024-07-15 16:08:33.759142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.759183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.759363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.759391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.759592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.759620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.759785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.759812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.759996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.760214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.760369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.760552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.760768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.760967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.760992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 
00:27:06.992 [2024-07-15 16:08:33.761170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.761197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.761399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.761424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.761575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.761603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.761773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.761801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.761993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.762019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.762193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.762219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.762434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.762460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.762645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.762671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.762825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.762853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.763002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.763028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 
00:27:06.992 [2024-07-15 16:08:33.763220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.763245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.763409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.763438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.763653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.763678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.763836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.763861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.764022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.764050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.764256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.764284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.764468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.764493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.764684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.764712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.992 [2024-07-15 16:08:33.764898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.992 [2024-07-15 16:08:33.764927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.992 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.765118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.765143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 
00:27:06.993 [2024-07-15 16:08:33.765297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.765322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.765497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.765530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.765708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.765734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.765894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.765920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.766077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.766102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.766243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.766268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.766441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.766471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.766692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.766717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.766880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.766906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.767087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.767116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 
00:27:06.993 [2024-07-15 16:08:33.767308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.767334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.767491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.767516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.767724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.767751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.767923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.767952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.768106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.768131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.768285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.768310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.768467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.768511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.768686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.768711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.768893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.768922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.769075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.769103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 
00:27:06.993 [2024-07-15 16:08:33.769283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.769308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.769460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.769488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.769654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.769682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.769900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.769926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.770052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.770094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.770309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.770334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.770490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.770515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.770644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.770669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.770863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.770893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.771095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.771120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 
00:27:06.993 [2024-07-15 16:08:33.771297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.771326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.771504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.771532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.771686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.771711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.771873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.771903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.772071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.772096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.772253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.772278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.772404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.772430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.772609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.772637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.772852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.772881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 00:27:06.993 [2024-07-15 16:08:33.773083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.993 [2024-07-15 16:08:33.773111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.993 qpair failed and we were unable to recover it. 
00:27:06.994 [2024-07-15 16:08:33.773298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.773323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.773477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.773506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.773683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.773711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.773899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.773925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.774083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.774108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.774283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.774311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.774519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.774547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.774732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.774757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.774936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.774966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.775136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.775164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 
00:27:06.994 [2024-07-15 16:08:33.775340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.775367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.775546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.775575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.775751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.775779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.775933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.775959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.776096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.776137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.776320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.776346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.776512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.776537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.776710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.776738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.776892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.776921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.777128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.777153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 
00:27:06.994 [2024-07-15 16:08:33.777330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.777358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.777531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.777559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.777715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.777739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.777905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.777933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.778111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.778138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.778340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.778365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.778548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.778603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.778748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.778777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.778992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.779018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.779210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.779238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 
00:27:06.994 [2024-07-15 16:08:33.779415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.779443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.779630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.779655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.779798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.779827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.780067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.780215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.780413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.780585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.780822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.780981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.781007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 00:27:06.994 [2024-07-15 16:08:33.781166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.781191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.994 qpair failed and we were unable to recover it. 
00:27:06.994 [2024-07-15 16:08:33.781341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.994 [2024-07-15 16:08:33.781366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:06.995 qpair failed and we were unable to recover it.
00:27:07.000 [... the same three-line failure repeats continuously from 16:08:33.781341 through 16:08:33.821898: every connect() attempt to 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error on the same tqpair=0x7f1708000b90, and each qpair fails and cannot be recovered ...]
00:27:07.000 [2024-07-15 16:08:33.822077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.822103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.822281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.822308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.822477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.822504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.822673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.822698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.822884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.822912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.823095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.823123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.823285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.823311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.823467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.823492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.823647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.823675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.823855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.823885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 
00:27:07.000 [2024-07-15 16:08:33.824041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.824071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.824276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.824304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.824485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.824510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.824691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.824719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.824860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.824900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.825081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.825107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.825255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.825284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.000 [2024-07-15 16:08:33.825488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-07-15 16:08:33.825516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.000 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.825672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.825702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.825889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.825919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 
00:27:07.001 [2024-07-15 16:08:33.826119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.826147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.826324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.826349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.826501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.826529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.826708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.826736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.826922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.826947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.827111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.827136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.827345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.827373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.827527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.827553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.827729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.827758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.827927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.827955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 
00:27:07.001 [2024-07-15 16:08:33.828129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.828155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.828301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.828330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.828513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.828541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.828722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.828747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.828895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.828925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.829126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.829153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.829335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.829360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.829546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.829574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.829719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.829746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.829933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.829958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 
00:27:07.001 [2024-07-15 16:08:33.830089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.830114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.830308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.830334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.830465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.830491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.830666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.830693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.830896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.830924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.831118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.831143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.831278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.831303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.831461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.831486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.831642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.831668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 00:27:07.001 [2024-07-15 16:08:33.831852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.831885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.001 qpair failed and we were unable to recover it. 
00:27:07.001 [2024-07-15 16:08:33.832035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.001 [2024-07-15 16:08:33.832063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.832248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.832273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.832422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.832451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.832624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.832652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.832801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.832826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.832990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.833143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.833350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.833517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.833729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 
00:27:07.002 [2024-07-15 16:08:33.833941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.833967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.834107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.834133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.834341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.834369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.834547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.834573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.834724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.834753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.834930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.834959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.835140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.835166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.835312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.835341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.835545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.835573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.835751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.835776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 
00:27:07.002 [2024-07-15 16:08:33.835940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.835984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.836161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.836200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.836362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.836387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.836561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.836589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.836766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.836806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.836992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.837019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.837176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.837204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.837380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.837408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.837578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.837604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.837787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.837814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 
00:27:07.002 [2024-07-15 16:08:33.837993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.838178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.838406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.838573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.838782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.838967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.838995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.839201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.839226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.839406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.839430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.839590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.839618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.839763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.839791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 
00:27:07.002 [2024-07-15 16:08:33.839974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.002 [2024-07-15 16:08:33.839999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.002 qpair failed and we were unable to recover it. 00:27:07.002 [2024-07-15 16:08:33.840163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.840196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.840379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.840404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.840592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.840617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.840828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.840856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.841030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.841058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.841224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.841249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.841430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.841458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.841644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.841677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.841829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.841855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 
00:27:07.003 [2024-07-15 16:08:33.842040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.842070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.842223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.842251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.842434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.842459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.842641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.842671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.842903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.842932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.843085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.843111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.843268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.843311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.843478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.843507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.843709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.843734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.843919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.843948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 
00:27:07.003 [2024-07-15 16:08:33.844093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.844121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.844273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.844300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.844470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.844499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.844678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.844706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.844902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.844945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.845109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.845135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.845333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.845361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.845547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.845573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.845728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.845756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.845955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.845980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 
00:27:07.003 [2024-07-15 16:08:33.846111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.846137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.846325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.846353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.846529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.846557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.846733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.846758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.846941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.846971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.847125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.847154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.847304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.847329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.847467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.847508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.847646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.847674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.847862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.847902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 
00:27:07.003 [2024-07-15 16:08:33.848037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.848064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.003 [2024-07-15 16:08:33.848218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.003 [2024-07-15 16:08:33.848246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.003 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.848456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.848481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.848618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.848643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.848806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.848849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.849054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.849079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.849233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.849261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.849434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.849462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.849620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.849650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.849815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.849840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 
00:27:07.004 [2024-07-15 16:08:33.850050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.850076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.850239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.850264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.850409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.850438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.850609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.850637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.850795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.850820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.850987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.851012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.851223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.851261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.851468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.851493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.851702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.851730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 00:27:07.004 [2024-07-15 16:08:33.851864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.004 [2024-07-15 16:08:33.851899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.004 qpair failed and we were unable to recover it. 
00:27:07.004 [2024-07-15 16:08:33.852077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.852103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.852288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.852315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.852475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.852502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.852663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.852689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.852863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.852899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.853175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.853221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.853413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.853439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.853595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.853624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.853776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.853804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.854017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.854043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.854198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.854233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.854491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.854543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.854703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.854728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.854886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.854912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.855077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.855102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.855254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.855293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.855457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.855484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.855654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.855712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.855900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.855927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.856089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.856115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.856283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.856309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.856494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.856545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-07-15 16:08:33.856713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.004 [2024-07-15 16:08:33.856757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.856929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.856956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.857168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.857197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.857374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.857403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.857586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.857636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.857805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.857833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.858030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.858056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.858228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.858271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.858453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.858518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.858697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.858725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.858951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.858978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.859114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.859139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.859328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.859356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.859507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.859534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.859708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.859736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.859932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.859958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.860116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.860141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.860346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.860372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.860523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.860551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.860728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.860756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.860915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.860942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.861077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.861103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.861261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.861289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.861492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.861520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.861687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.861714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.861883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.861927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.862111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.862137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.862290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.862318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.862507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.862535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.862707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.862735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.862910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.862951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.863110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.863136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.863332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.863357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.863562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.863590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.863847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.863875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.864062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.864087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.864256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.864284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-07-15 16:08:33.864513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.005 [2024-07-15 16:08:33.864568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.864832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.864859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.865055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.865080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.865280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.865309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.865502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.865530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.865677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.865705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.865880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.865924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.866057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.866082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.866259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.866288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.866527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.866576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.866806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.866838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.867040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.867066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.867223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.867252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.867413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.867454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.867619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.867647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.867820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.867849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.868034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.868060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.868218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.868247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.868451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.868501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.868805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.868861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.869057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.869083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.869244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.869272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.869447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.869475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.869649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.869677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.869818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.869846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.870052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.870092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.870265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.870292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.870543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.870595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.870789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.870841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.870998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.871026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.871190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.871217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.871393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.871436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.871645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.871689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.871848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.871874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.872049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.872075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.872227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.872256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.872427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.872456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.872654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.872685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.872834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.872862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.006 [2024-07-15 16:08:33.873078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.006 [2024-07-15 16:08:33.873104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.006 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.873371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.873399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.873578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.873606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.873765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.873790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.874035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.874061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.874222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.874250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.874401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.874429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.874682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.874734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.874889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.874933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.875072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.875097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.875226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.875251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.875453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.875481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.875663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.875691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.875899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.875927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.876059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.876084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.876267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.876295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.876589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.876648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.876820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.876848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.877025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.877051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.877214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.877239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.877419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.877447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.877621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.877651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.877829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.877857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.878032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.878057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.878212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.878240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.878447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.878476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.878626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.878655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.878901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.878945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.879111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.879136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.879293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.879321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.879475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.879503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.879709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.879737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.879884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.879913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.880065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.880090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.880244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.880269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.880511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.880539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.880689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.880717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.880899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.007 [2024-07-15 16:08:33.880925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.007 qpair failed and we were unable to recover it.
00:27:07.007 [2024-07-15 16:08:33.881081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.881106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.881298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.881326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.881507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.881535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.881707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.881735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.881906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.881951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.882106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.882131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.882263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.882288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.882451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.882492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.882671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.882698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.882839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.882867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.883029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.883054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.883181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.883205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.883336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.883379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.883553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.883595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.883863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.883903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.884059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.884084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.884233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.884261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.884457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.884484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.884636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.884831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.884860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.885097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.885137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.885335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.885381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.885564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.885608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.885802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.885828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.885969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.885995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.886147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.886189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.886363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.886390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.886573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.886615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.886782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.886808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.886997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.887028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.887232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.887260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.887412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.887441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.887706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.887757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.887944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.008 [2024-07-15 16:08:33.887970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.008 qpair failed and we were unable to recover it.
00:27:07.008 [2024-07-15 16:08:33.888129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.008 [2024-07-15 16:08:33.888154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.008 qpair failed and we were unable to recover it. 00:27:07.008 [2024-07-15 16:08:33.888342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.888370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.888541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.888569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.888742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.888770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.888940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.888965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.889145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.889170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.889312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.889341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.889599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.889654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.889855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.889891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.890071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.890097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 
00:27:07.009 [2024-07-15 16:08:33.890334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.890385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.890531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.890559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.890737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.890764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.890968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.890994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.891175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.891203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.891380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.891408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.891622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.891682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.891891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.891932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.892093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.892118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.892254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.892279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 
00:27:07.009 [2024-07-15 16:08:33.892454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.892482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.892638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.892666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.892842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.892868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.893035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.893060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.893237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.893265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.893521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.893572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.893743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.893770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.893958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.893983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.894146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.894177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.894392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.894420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 
00:27:07.009 [2024-07-15 16:08:33.894611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.894639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.894884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.894913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.895117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.895142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.895340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.895366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.895547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.895574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.895757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.895785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.895987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.896013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.896179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.896204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.896363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.896391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.896545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.896573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 
00:27:07.009 [2024-07-15 16:08:33.896808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.896836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.897035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.009 [2024-07-15 16:08:33.897060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.009 qpair failed and we were unable to recover it. 00:27:07.009 [2024-07-15 16:08:33.897213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.897241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.897415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.897443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.897598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.897626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.897794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.897822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.898053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.898093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.898263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.898290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.898497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.898525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.898765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.898814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 
00:27:07.010 [2024-07-15 16:08:33.899014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.899040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.899198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.899226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.899474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.899525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.899730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.899773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.899962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.899988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.900172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.900215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.900425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.900469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.900620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.900669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.900829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.900855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.901036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.901079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 
00:27:07.010 [2024-07-15 16:08:33.901255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.901297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.901550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.901606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.901748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.901774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.901965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.902010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.902191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.902235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.902400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.902442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.010 [2024-07-15 16:08:33.902624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.010 [2024-07-15 16:08:33.902671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.010 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.902855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.902886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.903049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.903094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.903281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.903323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 
00:27:07.011 [2024-07-15 16:08:33.903591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.903635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.903821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.903851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.904037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.904064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.904241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.904269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.904445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.904473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.904654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.904683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.904855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.904887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.905024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.905049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.905222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.905249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.905418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.905446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 
00:27:07.011 [2024-07-15 16:08:33.905644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.905672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.011 [2024-07-15 16:08:33.905842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.011 [2024-07-15 16:08:33.905870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.011 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.906049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.906076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.906261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.906289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.906459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.906487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.906777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.906834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.907034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.907060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.907247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.907277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.907466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.907522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.907702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.907730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 
00:27:07.293 [2024-07-15 16:08:33.907921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.907948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.908112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.908137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.908329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.908357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.908506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.908534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.908744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.908772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.908914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.908955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.909134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.909175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.909338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.909363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.909546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.909574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.909752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.909780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 
00:27:07.293 [2024-07-15 16:08:33.909946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.293 [2024-07-15 16:08:33.909972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.293 qpair failed and we were unable to recover it. 00:27:07.293 [2024-07-15 16:08:33.910166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.910194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.910354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.910382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.910592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.910644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.910824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.910852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.911019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.911044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.911203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.911228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.911402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.911430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.911594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.911622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.911828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.911856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 
00:27:07.294 [2024-07-15 16:08:33.912019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.912045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.912225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.912253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.912432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.912461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.912660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.912687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.912889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.912935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.913109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.913138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.913363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.913391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.913540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.913568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.913802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.913830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.914026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.914052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 
00:27:07.294 [2024-07-15 16:08:33.914212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.914238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.914394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.914422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.914602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.914632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.914782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.914810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.914988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.915014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.915148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.915173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.915335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.915360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.915615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.915642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.915851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.915885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.916050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.916075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 
00:27:07.294 [2024-07-15 16:08:33.916200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.916225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.916406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.916434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.916580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.916608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.916812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.916839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.917048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.917234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.917434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.917659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.917824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.294 [2024-07-15 16:08:33.917990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.918016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 
00:27:07.294 [2024-07-15 16:08:33.918193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.294 [2024-07-15 16:08:33.918221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.294 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.918357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.918385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.918580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.918607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.918818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.918846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.919021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.919047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.919175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.919200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.919407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.919435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.919623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.919664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.919808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.919833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.920010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.920036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 
00:27:07.295 [2024-07-15 16:08:33.920220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.920247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.920445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.920473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.920644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.920674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.920887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.920950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.921093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.921120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.921267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.921296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.921475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.921507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.921665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.921706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.921883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.921912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.922081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.922106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 
00:27:07.295 [2024-07-15 16:08:33.922264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.922289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.922475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.922503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.922708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.922736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.922912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.922938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.923120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.923145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.923340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.923368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.923629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.923691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.923887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.923915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.924088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.924116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 00:27:07.295 [2024-07-15 16:08:33.924360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.295 [2024-07-15 16:08:33.924406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.295 qpair failed and we were unable to recover it. 
00:27:07.295 [2024-07-15 16:08:33.925068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.295 [2024-07-15 16:08:33.925124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:07.295 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f1700000b90 from 16:08:33.925315 through 16:08:33.926791 ...]
00:27:07.296 [2024-07-15 16:08:33.926956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.296 [2024-07-15 16:08:33.926984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.296 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x10e6200 from 16:08:33.927140 through 16:08:33.963341 ...]
00:27:07.301 [2024-07-15 16:08:33.963542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.963570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.963714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.963739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.963898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.963943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.964127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.964152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.964310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.964335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.964542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.964570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.964780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.964804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.964989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.965191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.965364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 
00:27:07.301 [2024-07-15 16:08:33.965561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.965758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.965960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.965988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.966140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.966165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.966301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.966326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.966527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.966554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.966758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.966782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.966970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.966998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.967178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.967202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.967364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.967391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 
00:27:07.301 [2024-07-15 16:08:33.967578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.967606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.967785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.967813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.967990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.968234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.968415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.968622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.968782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.301 [2024-07-15 16:08:33.968946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.301 [2024-07-15 16:08:33.968988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.301 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.969193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.969218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.969389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.969417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 
00:27:07.302 [2024-07-15 16:08:33.969593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.969620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.969860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.969977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.970142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.970183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.970362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.970387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.970571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.970596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.970781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.970808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.970985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.971159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.971343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.971523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 
00:27:07.302 [2024-07-15 16:08:33.971737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.971915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.971944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.972118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.972148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.972328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.972353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.972502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.972529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.972678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.972707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.972887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.972913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.973090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.973119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.973327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.973355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.973536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.973561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 
00:27:07.302 [2024-07-15 16:08:33.973717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.973744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.973921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.973951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.974127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.974152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.974329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.974357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.974531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.974559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.974719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.974744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.974883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.974909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.975092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.975121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.975326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.975352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.975534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.975562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 
00:27:07.302 [2024-07-15 16:08:33.975730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.975757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.975941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.975967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.976124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.976149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.976309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.302 [2024-07-15 16:08:33.976351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.302 qpair failed and we were unable to recover it. 00:27:07.302 [2024-07-15 16:08:33.976522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.976547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.976730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.976758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.976903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.976932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.977141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.977180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.977365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.977393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.977567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.977595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 
00:27:07.303 [2024-07-15 16:08:33.977777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.977962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.978204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.978375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.978600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.978766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.978949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.978975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.979146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.979174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.979317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.979345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.979550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.979575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 
00:27:07.303 [2024-07-15 16:08:33.979751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.979779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.979922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.979951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.980137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.980163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.980294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.980319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.980493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.980521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.980731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.980756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.980969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.980997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.981195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.981223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.981416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.981441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.981600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.981628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 
00:27:07.303 [2024-07-15 16:08:33.981798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.981825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.982000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.982025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.982161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.982203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.982403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.982430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.982586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.982615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.982795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.982823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.983008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.983034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.983162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.983188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.983310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.983351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 00:27:07.303 [2024-07-15 16:08:33.983553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.303 [2024-07-15 16:08:33.983581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.303 qpair failed and we were unable to recover it. 
00:27:07.304 [2024-07-15 16:08:33.983766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.983792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.983914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.983958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.984155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.984183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.984338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.984363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.984541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.984569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.984756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.984781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.984909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.984936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.985090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.985116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.985301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.985329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.985471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.985497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 
00:27:07.304 [2024-07-15 16:08:33.985640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.985681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.985855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.985889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.986052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.986077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.986248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.986275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.986424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.986451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.986617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.986644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.986824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.986849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.987041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.987071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.987222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.987249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.987461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.987486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 
00:27:07.304 [2024-07-15 16:08:33.987649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.987674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.987859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.987892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.988109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.988138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.988281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.988309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.988464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.988489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.988643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.988686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.988834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.988862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.989073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.989101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.989255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.989280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 00:27:07.304 [2024-07-15 16:08:33.989455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.304 [2024-07-15 16:08:33.989483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.304 qpair failed and we were unable to recover it. 
00:27:07.304 [2024-07-15 16:08:33.989675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.304 [2024-07-15 16:08:33.989703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.304 qpair failed and we were unable to recover it.
[... the same three-line error repeats roughly 200 more times with successive timestamps (16:08:33.989873 through 16:08:34.031511), always for tqpair=0x10e6200, addr=10.0.0.2, port=4420 ...]
00:27:07.311 [2024-07-15 16:08:34.031706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.311 [2024-07-15 16:08:34.031735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.311 qpair failed and we were unable to recover it.
00:27:07.311 [2024-07-15 16:08:34.031926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.031952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.032093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.032118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.032279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.032494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.032523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.032672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.032698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.032838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.032890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.033051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.033077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.033252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.033281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.033433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.033458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.033639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.033665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 
00:27:07.311 [2024-07-15 16:08:34.033851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.033887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.034070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.034096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.034260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.034287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.034466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.034494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.034678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.034703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.311 [2024-07-15 16:08:34.034892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.311 [2024-07-15 16:08:34.034922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.311 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.035092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.035117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.035245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.035287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.035436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.035464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.035630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.035659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 
00:27:07.312 [2024-07-15 16:08:34.035817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.035843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.035998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.036159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.036338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.036548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.036743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.036922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.036948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.037105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.037135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.037316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.037342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.037503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.037530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 
00:27:07.312 [2024-07-15 16:08:34.037675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.037704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.037890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.037919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.038102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.038127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.038288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.038313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.038479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.038506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.038687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.038715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.038886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.038912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.039077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.039119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.039299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.039327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.039478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.039511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 
00:27:07.312 [2024-07-15 16:08:34.039667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.039692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.039854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.039913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.040070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.040098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.040288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.040314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.040473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.040498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.040675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.040702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.040872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.040910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.041118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.041147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.041326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.041350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.041498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.041525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 
00:27:07.312 [2024-07-15 16:08:34.041695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.041723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.041934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.041963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.042111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.042136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.042281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.042323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.042527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.042555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.312 [2024-07-15 16:08:34.042707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.312 [2024-07-15 16:08:34.042736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.312 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.042903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.042945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.043106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.043131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.043324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.043353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.043524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.043552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 
00:27:07.313 [2024-07-15 16:08:34.043713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.043739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.043905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.043949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.044101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.044129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.044336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.044365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.044545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.044570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.044715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.044743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.044923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.044956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.045095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.045124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.045329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.045355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.045536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.045564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 
00:27:07.313 [2024-07-15 16:08:34.045730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.045758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.045896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.045930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.046107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.046132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.046310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.046338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.046544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.046572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.046744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.046773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.046953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.046980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.047115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.047141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.047327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.047356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.047533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.047561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 
00:27:07.313 [2024-07-15 16:08:34.047738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.047767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.047933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.047959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.048119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.048145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.048353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.048378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.048535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.048561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.048745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.048773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.048965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.048991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.049122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.049148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.049302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.049327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.049499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.049527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 
00:27:07.313 [2024-07-15 16:08:34.049680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.049708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.049917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.049947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.050135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.050160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.050324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.050353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.050489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.050514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.313 [2024-07-15 16:08:34.050704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.313 [2024-07-15 16:08:34.050730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.313 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.050927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.050953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.051135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.051163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.051344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.051372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.051584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.051610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 
00:27:07.314 [2024-07-15 16:08:34.051765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.051790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.051960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.051989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.052134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.052163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.052314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.052343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.052521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.052547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.052693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.052722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.052926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.052955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.053134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.053170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.053354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.053380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.053539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.053565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 
00:27:07.314 [2024-07-15 16:08:34.053702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.053744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.053901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.053930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.054087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.054112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.054274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.054317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.054461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.054489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.054689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.054718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.054893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.054930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.055082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.055252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.055467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 
00:27:07.314 [2024-07-15 16:08:34.055611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.055799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.055961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.055987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.056166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.056195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.056379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.056405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.056584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.056611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.056813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.056841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.057037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.057064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.057221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.057246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 00:27:07.314 [2024-07-15 16:08:34.057428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.314 [2024-07-15 16:08:34.057457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.314 qpair failed and we were unable to recover it. 
00:27:07.314 [2024-07-15 16:08:34.057623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.314 [2024-07-15 16:08:34.057651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.314 qpair failed and we were unable to recover it.
00:27:07.320 [... the same three-line failure repeats back-to-back through 2024-07-15 16:08:34.099233 (≈210 occurrences in this window), every time with errno = 111 from posix.c:1038:posix_sock_create, the matching nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error for tqpair=0x10e6200 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:27:07.320 [2024-07-15 16:08:34.099404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.099429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.099567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.099592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.099751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.099794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.100006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.100033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.100214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.100239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.100437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.100465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.100619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.100647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.100823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.100852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.101012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.101038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.101176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.101218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 
00:27:07.320 [2024-07-15 16:08:34.101427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.101455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.101609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.101644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.101890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.101933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.102095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.102120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.102305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.102333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.102506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.102535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.102715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.102740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.102917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.102946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.103100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.103127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.103333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.103361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 
00:27:07.320 [2024-07-15 16:08:34.103545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.103570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.103790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.103818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.103982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.104009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.104213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.104241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.104432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.320 [2024-07-15 16:08:34.104457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.320 qpair failed and we were unable to recover it. 00:27:07.320 [2024-07-15 16:08:34.104610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.104638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.104806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.104834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.105022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.105211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.105368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 
00:27:07.321 [2024-07-15 16:08:34.105547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.105768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.105952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.105979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.106133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.106158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.106371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.106399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.106555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.106583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.106761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.106786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.106968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.106996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.107199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.107235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.107371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.107400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 
00:27:07.321 [2024-07-15 16:08:34.107586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.107611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.107768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.107793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.107967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.107993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.108160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.108185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.108342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.108368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.108551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.108576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.108762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.108789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.108993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.109023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.109205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.109230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.109402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.109427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 
00:27:07.321 [2024-07-15 16:08:34.109596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.109624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.109798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.109826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.110044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.110070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.110277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.110326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.110516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.110544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.110724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.110753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.110931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.110957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.111142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.111183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.111348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.111376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.111637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.111694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 
00:27:07.321 [2024-07-15 16:08:34.111844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.111870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.112066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.112091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.112271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.112299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.112537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.112591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.112781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.112806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.112943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.321 [2024-07-15 16:08:34.112974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.321 qpair failed and we were unable to recover it. 00:27:07.321 [2024-07-15 16:08:34.113139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.113178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.113387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.113413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.113577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.113602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.113783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.113811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 
00:27:07.322 [2024-07-15 16:08:34.113991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.114017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.114174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.114200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.114395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.114421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.114584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.114609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.114766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.114792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.114983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.115009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.115192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.115217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.115502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.115553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.115726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.115754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.115983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 
00:27:07.322 [2024-07-15 16:08:34.116142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.116337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.116499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.116705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.116916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.116942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.117086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.117114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.117287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.117316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.117524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.117553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.117701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.117728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.117857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.117905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 
00:27:07.322 [2024-07-15 16:08:34.118096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.118124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.118279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.118307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.118492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.118517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.118659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.118687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.118841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.118869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.119026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.119052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.119207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.119232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.119409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.119437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.119580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.119607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.119811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.119839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 
00:27:07.322 [2024-07-15 16:08:34.120007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.120033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.120203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.120228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.120386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.120414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.120592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.322 [2024-07-15 16:08:34.120620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.322 qpair failed and we were unable to recover it. 00:27:07.322 [2024-07-15 16:08:34.120801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.120826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.120979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.121141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.121166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.121392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.121418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.121602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.121627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.121808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.121836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 
00:27:07.323 [2024-07-15 16:08:34.122049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.122074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.122251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.122321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.122472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.122497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.122666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.122693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.122872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.122910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.123085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.123110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.123273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.123300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.123438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.123463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.123645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.123670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.123860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.123898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 
00:27:07.323 [2024-07-15 16:08:34.124047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.124073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.124232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.124274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.124443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.124471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.124648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.124678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.124859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.124904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.125129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.125155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.125294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.125319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.125509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.125537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.125716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.125743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.125911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.125954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 
00:27:07.323 [2024-07-15 16:08:34.126139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.126181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.126383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.126411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.126595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.126620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.126783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.126811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.127029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.127055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.127187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.127214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.127374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.127399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.127603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.127631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.127830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.127858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 00:27:07.323 [2024-07-15 16:08:34.128071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.323 [2024-07-15 16:08:34.128097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.323 qpair failed and we were unable to recover it. 
00:27:07.329 [2024-07-15 16:08:34.168889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.329 [2024-07-15 16:08:34.168918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.329 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.169079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.169104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.169263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.169289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.169500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.169528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.169702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.169729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.169900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.169926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.170174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.170202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.170347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.170375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.170552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.170580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.170735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.170759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 
00:27:07.330 [2024-07-15 16:08:34.170910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.170958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.171137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.171166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.171308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.171335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.171537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.171562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.171738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.171766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.171916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.171945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.172118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.172147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.172302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.172327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.172467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.172508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.172658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.172686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 
00:27:07.330 [2024-07-15 16:08:34.172895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.172924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.173164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.173189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.173389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.173414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.173567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.173608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.173782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.173810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.173993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.174154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.174313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.174527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.174721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 
00:27:07.330 [2024-07-15 16:08:34.174915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.174944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.175115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.175142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.175295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.175323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.175469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.175495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.175636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.175678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.175828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.175858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.176012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.176040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.176248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.176277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.176453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.330 [2024-07-15 16:08:34.176481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.330 qpair failed and we were unable to recover it. 00:27:07.330 [2024-07-15 16:08:34.176635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.176663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 
00:27:07.331 [2024-07-15 16:08:34.176803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.176832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.176992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.177018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.177138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.177163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.177371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.177399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.177543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.177572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.177780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.177805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.177989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.178018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.178164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.178192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.178442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.178470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.178653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.178678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 
00:27:07.331 [2024-07-15 16:08:34.178830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.178858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.179008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.179036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.179185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.179213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.179422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.179447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.179610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.179658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.179826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.179854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.180031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.180057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.180220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.180245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.180379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.180404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.180615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.180642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 
00:27:07.331 [2024-07-15 16:08:34.180790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.180818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.181027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.181052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.181225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.181253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.181436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.181464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.181712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.181744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.181925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.181951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.182127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.182323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.182351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.182550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.182577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.182758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.182783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 
00:27:07.331 [2024-07-15 16:08:34.182951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.182979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.183155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.183183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.183323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.183351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.183505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.183530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.183710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.183754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.183927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.183956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.184101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.184129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.184301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.184327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.184509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-07-15 16:08:34.184538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-07-15 16:08:34.184742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.184770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-07-15 16:08:34.184923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.184952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.185103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.185129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.185365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.185418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.185573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.185601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.185749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.185778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.185964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.185990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.186152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.186177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.186374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.186399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.186560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.186585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.186768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.186797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-07-15 16:08:34.186981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.187138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.187364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.187545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.187699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.187855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.187886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.188026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.188067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.188274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.188300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.188430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.188455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.188614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.188639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-07-15 16:08:34.188827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.188855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.189060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.189263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.189435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.189666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.189831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.189997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.190023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.190148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.190173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.190335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.190379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-07-15 16:08:34.190555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-07-15 16:08:34.190580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-07-15 16:08:34.190733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.190761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.190908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.190938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.191116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.191144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.191293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.191319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.191569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.191619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.191815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.191843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.192005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.192031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.192188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.192213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.192418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.192469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.192672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.192700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-07-15 16:08:34.192883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.192911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.193094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.193119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.193324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.193373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.193562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.193590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.193773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.193801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.194012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.194038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.194168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.194193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.194357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.194382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.194564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.194592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-07-15 16:08:34.194738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-07-15 16:08:34.194763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-07-15 16:08:34.194895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.333 [2024-07-15 16:08:34.194939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.333 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt between 16:08:34.194895 and 16:08:34.236324; only the timestamps advance. Intermediate repetitions elided. ...]
00:27:07.620 [2024-07-15 16:08:34.236299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.620 [2024-07-15 16:08:34.236324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.620 qpair failed and we were unable to recover it.
00:27:07.620 [2024-07-15 16:08:34.236487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.236512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.236644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.236669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.236796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.236837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.236981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.237009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.237162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.237190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.237372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.237396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.237597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.237625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.237829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.237854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.238046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.238071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.238203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.238232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 
00:27:07.620 [2024-07-15 16:08:34.238429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.238477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.238652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.238679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.238885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.238928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.239087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.239112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.239314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.239363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.239537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.239564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.239740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.239767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.239947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.239973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.240117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.240145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.240344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.240371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 
00:27:07.620 [2024-07-15 16:08:34.240516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.240543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.240739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.240764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.240946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.240974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.241163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.241188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.241319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.241344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.241475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.241499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.241624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.241649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.241806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.241848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.620 qpair failed and we were unable to recover it. 00:27:07.620 [2024-07-15 16:08:34.242026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.620 [2024-07-15 16:08:34.242052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.242176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.242202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 
00:27:07.621 [2024-07-15 16:08:34.242356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.242381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.242547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.242574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.242754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.242781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.242997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.243023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.243195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.243222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.243421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.243448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.243623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.243655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.243831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.243856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.244048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.244075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.244230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.244258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 
00:27:07.621 [2024-07-15 16:08:34.244393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.244421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.244631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.244655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.244862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.244899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.245069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.245097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.245243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.245270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.245420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.245447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.245612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.245654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.245835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.245860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.246000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.246026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.246220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.246244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 
00:27:07.621 [2024-07-15 16:08:34.246434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.246483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.246621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.246648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.246814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.246842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.247946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.247975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.248155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.248183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 
00:27:07.621 [2024-07-15 16:08:34.248354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.248382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.248534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.248559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.248717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.248742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.248920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.248948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.249134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.249159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.249315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.249339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.249495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.249520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.249658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.249700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.621 [2024-07-15 16:08:34.249890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.621 [2024-07-15 16:08:34.249918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.621 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.250064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.250088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 
00:27:07.622 [2024-07-15 16:08:34.250268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.250296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.250444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.250472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.250645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.250673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.250847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.250872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.251074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.251125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.251273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.251301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.251472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.251499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.251686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.251711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.251853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.251886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.252025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 
00:27:07.622 [2024-07-15 16:08:34.252181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.252403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.252557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.252758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.252973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.252999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.253156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.253181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.253360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.253387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.253537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.253565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.253736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.253763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.253948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.253974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 
00:27:07.622 [2024-07-15 16:08:34.254127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.254169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.254356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.254384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.254562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.254590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.254769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.254797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.254961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.254987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.255144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.255186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.255385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.255412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.255565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.255590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.255789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.255818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.256003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 
00:27:07.622 [2024-07-15 16:08:34.256195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.256354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.256559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.256732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.256929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.256959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.257117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.257142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.257330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.257358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.257510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.257540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.257705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.622 [2024-07-15 16:08:34.257733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.622 qpair failed and we were unable to recover it. 00:27:07.622 [2024-07-15 16:08:34.257913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.257938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 
00:27:07.623 [2024-07-15 16:08:34.258124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.258165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.258344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.258372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.258574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.258602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.258771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.258799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.258986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.259138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.259356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.259579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.259758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.259951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.259976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 
00:27:07.623 [2024-07-15 16:08:34.260113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.260138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.260322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.260347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.260588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.260639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.260817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.260845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.261038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.261064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.261252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.261277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.261457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.261485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.261688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.261717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.261860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.261897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 00:27:07.623 [2024-07-15 16:08:34.262100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.623 [2024-07-15 16:08:34.262125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.623 qpair failed and we were unable to recover it. 
00:27:07.623 [2024-07-15 16:08:34.262302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.623 [2024-07-15 16:08:34.262329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.623 qpair failed and we were unable to recover it.
00:27:07.623 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 16:08:34.262302 through 16:08:34.304568, first for tqpair=0x10e6200, then for tqpair=0x7f1700000b90, then for tqpair=0x10e6200 again, always with addr=10.0.0.2, port=4420 ...]
00:27:07.628 [2024-07-15 16:08:34.304543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.628 [2024-07-15 16:08:34.304568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.628 qpair failed and we were unable to recover it.
00:27:07.628 [2024-07-15 16:08:34.304775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.628 [2024-07-15 16:08:34.304802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.628 qpair failed and we were unable to recover it. 00:27:07.628 [2024-07-15 16:08:34.304987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.305013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.305219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.305247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.305419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.305446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.305599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.305626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.305809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.305834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.305992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.306021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.306195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.306223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.306398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.306427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.306603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.306628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 
00:27:07.629 [2024-07-15 16:08:34.306830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.306858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.307083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.307112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.307286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.307311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.307475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.307500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.307653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.307705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.307844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.307872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.308057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.308085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.308239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.308264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.308535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.308596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.308804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.308829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 
00:27:07.629 [2024-07-15 16:08:34.308970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.308996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.309152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.309181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.309359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.309387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.309564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.309592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.309734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.309762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.309914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.309939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.310115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.310143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.310292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.310319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.310497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.310525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.310704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.310729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 
00:27:07.629 [2024-07-15 16:08:34.310932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.310960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.311110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.311138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.311285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.311313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.311495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.311521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.311695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.311723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.311894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.311923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.312095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.312124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.312299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.312324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.312474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.312502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.312672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.312699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 
00:27:07.629 [2024-07-15 16:08:34.312889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.312915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.313077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.629 [2024-07-15 16:08:34.313104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.629 qpair failed and we were unable to recover it. 00:27:07.629 [2024-07-15 16:08:34.313283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.313311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.313461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.313489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.313628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.313656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.313816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.313841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.314003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.314228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.314389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.314543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 
00:27:07.630 [2024-07-15 16:08:34.314739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.314942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.314971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.315111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.315139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.315320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.315345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.315485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.315509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.315716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.315744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.315938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.315967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.316123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.316148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.316283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.316309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.316465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.316490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 
00:27:07.630 [2024-07-15 16:08:34.316642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.316670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.316870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.316902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.317116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.317144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.317330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.317355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.317496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.317522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.317681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.317706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.317912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.317941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.318094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.318122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.318295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.318322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.318506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.318531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 
00:27:07.630 [2024-07-15 16:08:34.318661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.318701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.318875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.318912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.319064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.319092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.319255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.319281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.319442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.319485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.319656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.319685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.319892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.319921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.320078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.320105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.320269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.320294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.320415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.320440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 
00:27:07.630 [2024-07-15 16:08:34.320615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.320643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.320792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.320818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.630 qpair failed and we were unable to recover it. 00:27:07.630 [2024-07-15 16:08:34.320973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.630 [2024-07-15 16:08:34.321018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.321217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.321245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.321419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.321446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.321624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.321650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.321829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.321857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.322031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.322059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.322211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.322238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.322417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.322446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 
00:27:07.631 [2024-07-15 16:08:34.322621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.322649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.322845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.322873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.323041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.323069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.323216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.323240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.323416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.323457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.323616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.323644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.323819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.323845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.324008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.324034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.324267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.324318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.324533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.324558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 
00:27:07.631 [2024-07-15 16:08:34.324729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.324756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.324940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.324966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.325143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.325171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.325353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.325381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.325523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.325550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.325706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.325732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.325871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.325903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.326088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.326115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.326268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.326298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.326481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.326506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 
00:27:07.631 [2024-07-15 16:08:34.326678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.326705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.326905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.326934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.327116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.327141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.327303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.327329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.327505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.327532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.327733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.327761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.327915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.327948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.328124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.328149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.328393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.328440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.328622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.328650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 
00:27:07.631 [2024-07-15 16:08:34.328826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.328854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.329012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.329038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.329201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.329226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.329401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.631 [2024-07-15 16:08:34.329429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.631 qpair failed and we were unable to recover it. 00:27:07.631 [2024-07-15 16:08:34.329570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.329599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 00:27:07.632 [2024-07-15 16:08:34.329774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.329799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 00:27:07.632 [2024-07-15 16:08:34.330010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.330039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 00:27:07.632 [2024-07-15 16:08:34.330188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.330216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 00:27:07.632 [2024-07-15 16:08:34.330385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.330413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 00:27:07.632 [2024-07-15 16:08:34.330575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.632 [2024-07-15 16:08:34.330600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.632 qpair failed and we were unable to recover it. 
00:27:07.632 [2024-07-15 16:08:34.330807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.632 [2024-07-15 16:08:34.330836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.632 qpair failed and we were unable to recover it.
[... the same three-line error repeats approximately 210 times between 16:08:34.330807 and 16:08:34.372508, identical except for timestamps; the console wall-clock prefix advances from 00:27:07.632 to 00:27:07.637 ...]
00:27:07.637 [2024-07-15 16:08:34.372508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.637 [2024-07-15 16:08:34.372534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.637 qpair failed and we were unable to recover it.
00:27:07.637 [2024-07-15 16:08:34.372709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.372737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.372912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.372938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.373077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.373102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.373283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.373308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.373523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.373548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.373680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.373705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.373861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.373911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.374063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.374091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.374294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.374322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.374506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.374532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 
00:27:07.637 [2024-07-15 16:08:34.374702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.374730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.374871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.374908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.375111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.375139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.375297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.375323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.375444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.375469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.375667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.375695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.375838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.375865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.376053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.376078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.376221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.376247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-07-15 16:08:34.376384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.376408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 
00:27:07.637 [2024-07-15 16:08:34.376587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-07-15 16:08:34.376619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.376796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.376821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.376958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.376987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.377155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.377182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.377321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.377349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.377567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.377592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.377776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.377804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.377981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.378009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.378158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.378186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.378361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.378387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-07-15 16:08:34.378612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.378671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.378853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.378888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.379037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.379066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.379243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.379268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.379433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.379458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.379620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.379661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.379817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.379846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.380027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.380215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.380402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-07-15 16:08:34.380581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.380780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.380956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.380985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.381159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.381187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.381338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.381366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.381517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.381542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.381712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.381740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.381919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.381948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.382104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.382132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.382338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.382363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-07-15 16:08:34.382537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.382585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.382786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.382814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.382975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.383003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.383154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.383180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.383365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.383390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.383550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.383578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.383776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.383804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.383998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.384024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.384153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.384179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.384382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.384407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-07-15 16:08:34.384562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.384587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.384740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-07-15 16:08:34.384769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-07-15 16:08:34.384902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.384929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.385083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.385108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.385296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.385324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.385500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.385525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.385699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.385726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.385865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.385900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.386052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.386081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.386266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.386291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 
00:27:07.639 [2024-07-15 16:08:34.386509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.386560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.386763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.386791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.386995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.387023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.387180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.387205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.387391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.387442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.387593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.387622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.387794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.387822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.387990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.388015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.388192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.388220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.388418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.388446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 
00:27:07.639 [2024-07-15 16:08:34.388620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.388647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.388795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.388820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.389031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.389060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.389213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.389242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.389424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.389449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.389607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.389631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.389815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.389843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.390007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.390205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.390383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 
00:27:07.639 [2024-07-15 16:08:34.390543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.390762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.390933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.390962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.391146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.391171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.391378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.391427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.391627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.391654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.391830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.391858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.392019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.392044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.392218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.392246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.392411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.392439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 
00:27:07.639 [2024-07-15 16:08:34.392590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.392617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.392801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.392826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.639 [2024-07-15 16:08:34.393009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-07-15 16:08:34.393038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.393184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.393214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.393388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.393416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.393603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.393628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.393786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.393813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.393955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.393984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.394189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.394217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.394372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.394397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 
00:27:07.640 [2024-07-15 16:08:34.394581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.394606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.394786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.394815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.394996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.395022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.395189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.395215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.395355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.395380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.395584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.395616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.395820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.395848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.396046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.396071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.396204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.396246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.396428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.396457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 
00:27:07.640 [2024-07-15 16:08:34.396623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.396651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.396832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.396858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.397042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.397070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.397239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.397267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.397442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.397470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.397620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.397646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.397857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.397894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.398077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.398106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.398273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.398300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.398480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.398505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 
00:27:07.640 [2024-07-15 16:08:34.398638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.398680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.398828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.398856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.399046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.399071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.399231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.399257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.399471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.399499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.399673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.399701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.399909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.399938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.400096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.400121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.400287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.400315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-07-15 16:08:34.400484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-07-15 16:08:34.400512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 
00:27:07.646 [2024-07-15 16:08:34.440089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.440117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.440283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.440308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.440500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.440548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.440718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.440745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.440896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.440935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.441124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.441149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.441303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.441330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.441500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.441528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.441679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.441706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.441888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.441914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 
00:27:07.646 [2024-07-15 16:08:34.442095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.442123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.442295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.442323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.442478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.442505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.442687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.442711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.442906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.442936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.443120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.443146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.443298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.443340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.443480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.443505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-07-15 16:08:34.443658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-07-15 16:08:34.443699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.443848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.443885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 
00:27:07.647 [2024-07-15 16:08:34.444100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.444128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.444278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.444303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.444462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.444504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.444701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.444730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.444911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.444940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.445096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.445121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.445272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.445297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.445445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.445473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.445620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.445647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.445830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.445856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 
00:27:07.647 [2024-07-15 16:08:34.446047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.446075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.446245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.446273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.446470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.446498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.446682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.446708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.446915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.446944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.447084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.447112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.447254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.447281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.447464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.447489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.447698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.447748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.447959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.447984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 
00:27:07.647 [2024-07-15 16:08:34.448166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.448194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.448350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.448374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.448516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.448541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.448736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.448761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.448934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.448962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.449139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.449164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.449357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.449406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.449547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.449725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.449753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.449959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.449985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 
00:27:07.647 [2024-07-15 16:08:34.450162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.450190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.450326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.450354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.450488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.450516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.450688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.450713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.450892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.450920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.451094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.451266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.647 [2024-07-15 16:08:34.451294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.647 qpair failed and we were unable to recover it. 00:27:07.647 [2024-07-15 16:08:34.451473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.451498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.451639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.451664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.451816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.451841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 
00:27:07.648 [2024-07-15 16:08:34.451979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.452009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.452166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.452191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.452404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.452433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.452635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.452660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.452844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.452872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.453074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.453099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.453252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.453277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.453464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.453492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.453623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.453651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.453806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.453832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 
00:27:07.648 [2024-07-15 16:08:34.454014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.454043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.454211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.454240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.454392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.454420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.454627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.454652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.454873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.454911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.455093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.455119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.455274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.455319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.455469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.455495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.455658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.455683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.455843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.455870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 
00:27:07.648 [2024-07-15 16:08:34.456018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.456045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.456224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.456249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.456410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.456435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.456590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.456615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.456800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.456828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.456996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.457022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.457179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.457204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.457400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.457433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.457572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.457599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.457776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.457802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 
00:27:07.648 [2024-07-15 16:08:34.457982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.458157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.458367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.458572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.458777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.458941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.648 [2024-07-15 16:08:34.458970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.648 qpair failed and we were unable to recover it. 00:27:07.648 [2024-07-15 16:08:34.459148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.459175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.459349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.459374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.459565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.459615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.459756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.459783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 
00:27:07.649 [2024-07-15 16:08:34.459984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.460224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.460425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.460607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.460781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.460960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.460986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.461110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.461151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.461334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.461360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.461520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.461546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.461707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.461732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 
00:27:07.649 [2024-07-15 16:08:34.461910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.461939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.462105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.462132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.462305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.462333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.462516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.462541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.462712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.462740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.462892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.462921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.463093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.463121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.463307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.463332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.463505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.463533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.463732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.463760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 
00:27:07.649 [2024-07-15 16:08:34.463927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.463955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.464158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.464183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.464424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.464451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.464624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.464652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.464858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.464898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.465049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.465075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.465248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.465276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.465451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.465479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.465620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.465648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 00:27:07.649 [2024-07-15 16:08:34.465822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.649 [2024-07-15 16:08:34.465848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.649 qpair failed and we were unable to recover it. 
00:27:07.649 [2024-07-15 16:08:34.466034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-07-15 16:08:34.466063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-07-15 16:08:34.466196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-07-15 16:08:34.466224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 16:08:34.466 through 16:08:34.507, elapsed 00:27:07.649-00:27:07.654, differing only in timestamps ...]
00:27:07.654 [2024-07-15 16:08:34.507327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.654 [2024-07-15 16:08:34.507352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.654 qpair failed and we were unable to recover it.
00:27:07.654 [2024-07-15 16:08:34.507513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.654 [2024-07-15 16:08:34.507538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.654 qpair failed and we were unable to recover it. 00:27:07.654 [2024-07-15 16:08:34.507673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.654 [2024-07-15 16:08:34.507699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.654 qpair failed and we were unable to recover it. 00:27:07.654 [2024-07-15 16:08:34.507906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.654 [2024-07-15 16:08:34.507935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.654 qpair failed and we were unable to recover it. 00:27:07.654 [2024-07-15 16:08:34.508088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.654 [2024-07-15 16:08:34.508113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.508269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.508309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.508519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.508544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.508719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.508747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.508920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.508948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.509101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.509129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.509279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.509304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 
00:27:07.655 [2024-07-15 16:08:34.509442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.509483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.509632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.509660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.509857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.509903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.510079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.510243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.510425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.510663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.510838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.510984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.511025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.511226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.511255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 
00:27:07.655 [2024-07-15 16:08:34.511423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.511451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.511657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.511682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.511856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.511891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.512070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.512098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.512240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.512268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.512447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.512471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.512618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.512659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.512844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.512872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.513035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.513063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.513243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.513268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 
00:27:07.655 [2024-07-15 16:08:34.513496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.513547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.513699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.513727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.513901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.513931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.514092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.514117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.514298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.514326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.514467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.514495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.514643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.514670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.514846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.514871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.515058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.515087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.515231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.515258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 
00:27:07.655 [2024-07-15 16:08:34.515431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.515458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.515637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.515662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.515782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.515822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.515986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.516202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.516356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.516511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.516728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.516928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.516957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.655 qpair failed and we were unable to recover it. 00:27:07.655 [2024-07-15 16:08:34.517100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.655 [2024-07-15 16:08:34.517126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 
00:27:07.656 [2024-07-15 16:08:34.517307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.517334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.517502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.517530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.517706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.517734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.517916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.517943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.518121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.518149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.518324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.518352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.518555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.518583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.518740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.518766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.518921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.518947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.519104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.519136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 
00:27:07.656 [2024-07-15 16:08:34.519319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.519347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.519523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.519548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.519718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.519746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.519900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.519929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.520084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.520112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.520318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.520343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.520547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.520598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.520778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.520806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.520986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.521169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 
00:27:07.656 [2024-07-15 16:08:34.521354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.521594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.521775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.521961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.521987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.522112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.522137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.522311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.522339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.522484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.522512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.522719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.522744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.522900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.522929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.523115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.523140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 
00:27:07.656 [2024-07-15 16:08:34.523277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.523302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.523456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.523481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.523603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.523643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.523850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.523885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.656 [2024-07-15 16:08:34.524066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.656 [2024-07-15 16:08:34.524094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.656 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.524296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.524322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.524463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.524494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.524668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.524696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.524851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.524886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.525036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 
00:27:07.928 [2024-07-15 16:08:34.525216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.525444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.525600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.525783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.525966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.525996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.526146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.526175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.526347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.526375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.526541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.526566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.526771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.526799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.526973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.527002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 
00:27:07.928 [2024-07-15 16:08:34.527183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.527211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.527389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.527414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.527546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.527589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.527792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.527820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.527994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.528020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.528151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.528176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.928 [2024-07-15 16:08:34.528343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.928 [2024-07-15 16:08:34.528371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.928 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.528580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.528608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.528760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.528787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.528941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.528966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 
00:27:07.929 [2024-07-15 16:08:34.529103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.529128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.529292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.529334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.529519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.529544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.529703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.529728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.529893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.529923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.530089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.530117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.530265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.530293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.530474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.530499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.530707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.530842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.530870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 
00:27:07.929 [2024-07-15 16:08:34.531040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.531068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.531254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.531280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.531518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.531567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.531767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.531795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.531940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.531969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.532153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.532179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.532360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.532388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.532540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.532568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.532743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.532768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.532923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.532949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 
00:27:07.929 [2024-07-15 16:08:34.533157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.533185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.533329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.533357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.533531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.533560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.533742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.533767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.533898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.533943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.534138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.534164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.534296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.534321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.534450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.534475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.534600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.534642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 00:27:07.929 [2024-07-15 16:08:34.534808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.929 [2024-07-15 16:08:34.534836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.929 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-07-15 16:08:34.575391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.935 [2024-07-15 16:08:34.575420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.935 qpair failed and we were unable to recover it.
00:27:07.935 [2024-07-15 16:08:34.575598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.575623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.575770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.575795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.575916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.575942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.576109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.576151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.576298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.576326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.576478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.576503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.576640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.576683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.576860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.576897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.577074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.577102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.577280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.577305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-07-15 16:08:34.577474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.577501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.577625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.577650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.577798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.577826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.578038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.578219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.578415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.578601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.578804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.578994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.579023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.579163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.579190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-07-15 16:08:34.579372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.579400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.579602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.579627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.579800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.579828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.579978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.580007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.580208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.580385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.580411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-07-15 16:08:34.580542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-07-15 16:08:34.580583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.580730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.580758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.580960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.580989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.581157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.581183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-07-15 16:08:34.581362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.581390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.581595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.581628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.581794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.581821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.582035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.582064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.582248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.582273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.582475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.582503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.582657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.582685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.582859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.582903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.583061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.583089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.583269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.583297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-07-15 16:08:34.583458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.583486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.583647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.583672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.583817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.583842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.584018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.584044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.584257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.584286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.584442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.584467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.584668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.584696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.584869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.584904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.585109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.585137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.585314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.585339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-07-15 16:08:34.585550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.585598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.585774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.585802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.585953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.585982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.586143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.586168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.586381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.586429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.586576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.586605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.586804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.586832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.587015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.587041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.587233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.587289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.587432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.587459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-07-15 16:08:34.587631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.587659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.587846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.587871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.588062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.588090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.588234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.588262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.588436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.588466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.588669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.588694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.588884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.588910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-07-15 16:08:34.589066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-07-15 16:08:34.589108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.589253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.589281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.589461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.589487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-07-15 16:08:34.589642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.589667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.589799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.589824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.590039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.590065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.590223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.590248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.590420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.590448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.590622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.590650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.590798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.590826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.591015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.591041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.591203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.591258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.591452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.591481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-07-15 16:08:34.591642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.591670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.591844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.591869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.592084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.592113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.592268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.592300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.592499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.592527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.592673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.592699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.592884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.592913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.593051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.593079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.593255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.593284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.593491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.593516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-07-15 16:08:34.593701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.593756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.593923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.593951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.594149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.594177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.594360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.594385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.594555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.594583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.594725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.594753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.594931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.594960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.595113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.595138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.595355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.595383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.595568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.595593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-07-15 16:08:34.595730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.595755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.595893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-07-15 16:08:34.595918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-07-15 16:08:34.596124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.596151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.596306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.596334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.596481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.596509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.596713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.596738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.596952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.596980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.597152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.597179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.597359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.597387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.597568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.597593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.938 [2024-07-15 16:08:34.597754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.597779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.597911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.597937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.598092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.598119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.598273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.598297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.598423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.598448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.598645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.598673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.598842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.598869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.599044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.599069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.599207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.599250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.599421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.599449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.938 [2024-07-15 16:08:34.599628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.599656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.599857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.599890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.600064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.600092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.600246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.600273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.600471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.600499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.600679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.600703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.600906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.600939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.601132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.601157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.601315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.601340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-07-15 16:08:34.601496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-07-15 16:08:34.601521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.938 [2024-07-15 16:08:34.601661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.601691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.601869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.601903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.602110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.602138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.602327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.602352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.602531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.602558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.602738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.602765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.602914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.602944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.603121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.603145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.603347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.603398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.603601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.603629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.603813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.603840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.604024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.604050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.604203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.604231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.938 qpair failed and we were unable to recover it.
00:27:07.938 [2024-07-15 16:08:34.604406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.938 [2024-07-15 16:08:34.604434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.604570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.604598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.604774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.604798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.604980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.605184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.605360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.605563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.605742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.605947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.605975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.606175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.606203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.606359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.606388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.606519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.606544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.606707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.606750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.606935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.606961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.607118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.607144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.607346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.607395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.607567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.607595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.607764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.607792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.607965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.607992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.608138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.608163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.608304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.608345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.608519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.608547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.608720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.608745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.608926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.608955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.609099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.609128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.609328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.609356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.609540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.609566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.609706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.609735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.609911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.609940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.610116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.610144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.610323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.610348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.610470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.610512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.610651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.610679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.610884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.610913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.611121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.611146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.611287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.611315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.611492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.611520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.611694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.611726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.611909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.611935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.612090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.612118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.612261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.612289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.939 qpair failed and we were unable to recover it.
00:27:07.939 [2024-07-15 16:08:34.612462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.939 [2024-07-15 16:08:34.612490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.612645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.612670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.612804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.612830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.612995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.613172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.613360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.613512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.613695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.613945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.613974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.614148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.614174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.614380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.614408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.614574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.614601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.614748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.614776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.614935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.614961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.615197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.615239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.615386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.615414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.615583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.615611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.615792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.615817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.616000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.616026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.616219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.616244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.616447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.616475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.616631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.616656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.616861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.616895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.617075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.617103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.617255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.617283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.617459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.617484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.617661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.617689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.617839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.617867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.618081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.618109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.618264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.618289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.618421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.618462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.618636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.618663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.618806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.618834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.619016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.619042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.619257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.619285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.619479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.619507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.619647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.619675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.619834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.619859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.620047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.620076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.620283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.620308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.620483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.620511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-07-15 16:08:34.620652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.940 [2024-07-15 16:08:34.620678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.620855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.620891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.621079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.621104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.621240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.621265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.621426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.621451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.621658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.621710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.621904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.621933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.622088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.622117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.622307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.622332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.622526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.622571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.622754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.622782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.622983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.623012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.623224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.623249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.623427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.623476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.623621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.623648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.623841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.623869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.624058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.624084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.624263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.624316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.624548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.624573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.624819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.624847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.625031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.625057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.625214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.625268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.625468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.625496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.625665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.625697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.625849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.625874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.626061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.626089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.626255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.626283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.626482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.626510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.626687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.626712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.626862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.626899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.627043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.627072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.627245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.627272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.627451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.627476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.627611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.627653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.627792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.627820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.628013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.941 [2024-07-15 16:08:34.628038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-07-15 16:08:34.628195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.628219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.628352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.628378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.628546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.628571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.628730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.628758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.628915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.628940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.629117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.629145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.629327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.629355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.629563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.629591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.629741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.629766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.629975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.630175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.630348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.630557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.630709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.630902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.630935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.631138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.631167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.631366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.631391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.631594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.631622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.631798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.631826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.632031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.632056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.632236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.632261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.632439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.632467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.632669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.632697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.632870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.632907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.633084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.633109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.633297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.633324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.633508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.633536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.633684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.633712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.633885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.633911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.634118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.634147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.634329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.634357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.634491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.634519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.634702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.634727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.634934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.634962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.635108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.635135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.635346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.635371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.635527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.635553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.635743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.635771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.635976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.636005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.636176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.636204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.636357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.942 [2024-07-15 16:08:34.636382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-07-15 16:08:34.636538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.636580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.636720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.636748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.636925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.636953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.637135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.637160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.637323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.637349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.637531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.637557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.637709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.637737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.637916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.637942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.638125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.638153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.638297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.638325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.638499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.638528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.638728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.638753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.638954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.638983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.639134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.639162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.639361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.639389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.639575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.639601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.639770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.639797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.639950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.639979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.640149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.640177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.640355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.640380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.640537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.640563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.640720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.640744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.640872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.640904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.641071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.641097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.641254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.641279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.641462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.641487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.641621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.641647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.641811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.641836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.642011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.642040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.642211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.642239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.642397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.642422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.642600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.642630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.642829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.642857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.643048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.643230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.643385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.643570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.943 [2024-07-15 16:08:34.643782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.943 qpair failed and we were unable to recover it.
00:27:07.943 [2024-07-15 16:08:34.643962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-07-15 16:08:34.643990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-07-15 16:08:34.644179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-07-15 16:08:34.644208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-07-15 16:08:34.644387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-07-15 16:08:34.644416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-07-15 16:08:34.644561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-07-15 16:08:34.644596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.644751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.644776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.644992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.645021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.645165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.645193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.645346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.645374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.645579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.645604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.645783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.645811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-07-15 16:08:34.646008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.646191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.646402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.646572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.646770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.646947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.646975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.647196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.647221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.647383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.647429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.647632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.647660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.647802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.647830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-07-15 16:08:34.647994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.648020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.648156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.648199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.648365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.648393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.648602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.648627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.648786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.648811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.649024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.649054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.649216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.649244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.649404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.649433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.649632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.649657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.649819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.649844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-07-15 16:08:34.650025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.650216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.650394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.650570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.650789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.650969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.650998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.651151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.651176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.651380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.651408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.651582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.651610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.651762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.651790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-07-15 16:08:34.651985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.652010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.652157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.652185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.652323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.652351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.652558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.652586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.652738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-07-15 16:08:34.652764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-07-15 16:08:34.652976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.653005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.653208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.653236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.653380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.653408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.653578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.653603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.653789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.653814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-07-15 16:08:34.654003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.654186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.654388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.654560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.654761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.654969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.654998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.655161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.655187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.655340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.655369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.655524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.655552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.655723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.655750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-07-15 16:08:34.655954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.655979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.656129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.656157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.656306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.656334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.656496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.656524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.656676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.656701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.656853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.656885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.657093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.657121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.657259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.657288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.657469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.657496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.657700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.657729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-07-15 16:08:34.657867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.657904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.658054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.658082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.658281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.658307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.658543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.658591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.658768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.658796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.658978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.659007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.659191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.659216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.659422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.659450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.659620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.659647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.659789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.659818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-07-15 16:08:34.659981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.660186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.660356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.660529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.660715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.660948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.660997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-07-15 16:08:34.661161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-07-15 16:08:34.661189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.661393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.661422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.661592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.661617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.661751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.661776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-07-15 16:08:34.661952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.661978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.662134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.662162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.662320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.662346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.662544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.662572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.662718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.662746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.662920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.662949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.663151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.663176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.663345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.663393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.663569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.663597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.663792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.663820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-07-15 16:08:34.663970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.663995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.664159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.664201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.664368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.664396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.664591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.664619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.664805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.664830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.664995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.665021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.665237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.665262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.665468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.665497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.665655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.665680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.665840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.665865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-07-15 16:08:34.666038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.666066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.666223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.666251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.666439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.666464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.666644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.666671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.666849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.666885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.667040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.667068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.667226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.667252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.667395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.667421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.667563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.667588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.667767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.667794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-07-15 16:08:34.667996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.668022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.668177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.668204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-07-15 16:08:34.668381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-07-15 16:08:34.668411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.668580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.668608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.668761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.668786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.668968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.669016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.669196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.669221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.669424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.669453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.669626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.669651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.669775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.669818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-07-15 16:08:34.669976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.670005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.670173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.670201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.670409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.670434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.670602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.670652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.670822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.670850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.671051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.671077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.671225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.671250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.671390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.671416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.671631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.671659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.671865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.671903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-07-15 16:08:34.672081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.672106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.672302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.672351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.672499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.672528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.672726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.672754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.672903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.672937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.673116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.673144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.673343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.673371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.673511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.673540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.673697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.673723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-07-15 16:08:34.673854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-07-15 16:08:34.673885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-07-15 16:08:34.674052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.674079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.674236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.674263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.674446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.674475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.674633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.674657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.674812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.674841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.675042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.675068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.675231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.675256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.675413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.675438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.675611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.675648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.675834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.675860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.676049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.676075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.676205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.676230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.676369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.676394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.676582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.947 [2024-07-15 16:08:34.676610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.947 qpair failed and we were unable to recover it.
00:27:07.947 [2024-07-15 16:08:34.676783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.676808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.676987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.677163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.677334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.677544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.677742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.677944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.677972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.678145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.678172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.678327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.678352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.678477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.678517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.678693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.678721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.678931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.678957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.679114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.679139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.679396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.679446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.679651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.679679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.679831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.679859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.680064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.680090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.680251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.680276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.680458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.680486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.680659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.680686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.680858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.680890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.681069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.681096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.681249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.681277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.681480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.681508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.681683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.681708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.681895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.681934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.682082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.682110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.682286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.682314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.682464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.682489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.682598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f40e0 is same with the state(5) to be set
00:27:07.948 [2024-07-15 16:08:34.682854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.682910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.683097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.683127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.683311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.683337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.683550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.683579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.683783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.683812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.684003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.684030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.684220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.684249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.684428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.684458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.684639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.684665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.684838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.684868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.948 [2024-07-15 16:08:34.685084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.948 [2024-07-15 16:08:34.685113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.948 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.685306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.685332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.685509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.685538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.685718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.685747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.685930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.685957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.686110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.686147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.686327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.686356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.686558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.686584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.686742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.686772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.686923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.686952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.687144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.687170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.687382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.687410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.687597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.687624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.687789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.687815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.687968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.687998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.688175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.688204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.688387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.688417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.688598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.688627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.688801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.688830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.689018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.689045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.689260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.689289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.689460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.689489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.689671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.689697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.689852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.689884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.690075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.690105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.690290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.690315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.690497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.690526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.690727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.690755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.690924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.690950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.691147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.691189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.691380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.691409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.691598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.691623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.691781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.691805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.691982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.692010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.692189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.692213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.692416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.692443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.692636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.692665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.692870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.692904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.693069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.693097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.693275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.693302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.693472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-07-15 16:08:34.693498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-07-15 16:08:34.693621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.693645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.693839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.693867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.694037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.694068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.694254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.694282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.694458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.694487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.694635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.694661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.694806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.694847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.695918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.695944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.696101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.696127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.696283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.696308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.696491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.696519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.696697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.696726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.696884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.696910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.697054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.697079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.697245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.697270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.697435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.697460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.697643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.697671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.697842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.697870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.698093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.698281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.698452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.698666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.698853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.698988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.950 [2024-07-15 16:08:34.699014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
00:27:07.950 [2024-07-15 16:08:34.699170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.699199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.699335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.699360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.699515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.699541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.699676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.699701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.699826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.699868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.700066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.700091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.700253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.700278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.700524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.700574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.700730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.700758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.700917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.700943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.701081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.701123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.701263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.701291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.701447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.701472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.701597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.701638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.701819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.701848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.702038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.702258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.702467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.702628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.702814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.702981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.703010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.703167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.703193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.703374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.703402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.703607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.703635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.703813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.703838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.704053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.704082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.704257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.704285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.704462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.704491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.704787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.704840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.705024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.705055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.705240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.705265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.705451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.705479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.705633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.705661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.705809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.705834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.706022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.706051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.706231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.706258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.706441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.951 [2024-07-15 16:08:34.706466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.951 qpair failed and we were unable to recover it.
00:27:07.951 [2024-07-15 16:08:34.706646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.706699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.706841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.706869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.707044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.707070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.707225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.707267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.707453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.707481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.707662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.707687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.707873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.707913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.708093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.708118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.708284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.952 [2024-07-15 16:08:34.708308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.952 qpair failed and we were unable to recover it.
00:27:07.952 [2024-07-15 16:08:34.708457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.708482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.708631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.708659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.708808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.708833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.709029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.709057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.709232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.709260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.709470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.709497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.709679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.709707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.709891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.709921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.710106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.710133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.710370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.710396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 
00:27:07.952 [2024-07-15 16:08:34.710589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.710615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.710752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.710779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.710940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.710967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.711129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.711173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.711379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.711405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.711614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.711642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.711828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.711857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.712051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.712077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.712254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.712279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.712466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.712494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 
00:27:07.952 [2024-07-15 16:08:34.712653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.712679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.712837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.712862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.713936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.713962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.714103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.714129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.952 [2024-07-15 16:08:34.714292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.714318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 
00:27:07.952 [2024-07-15 16:08:34.714504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.952 [2024-07-15 16:08:34.714533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.952 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.714714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.714744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.714931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.714957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.715115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.715157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.715365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.715390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.715592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.715616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.715829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.715857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.716059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.716085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.716257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.716283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.716470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.716497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 
00:27:07.953 [2024-07-15 16:08:34.716686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.716714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.716898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.716924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.717081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.717107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.717263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.717291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.717476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.717501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.717688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.717716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.717891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.717920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.718066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.718091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.718218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.718264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.718482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.718515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 
00:27:07.953 [2024-07-15 16:08:34.718678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.718706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.718866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.718915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.719080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.719106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.719308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.719334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.719508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.719536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.719681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.719711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.719933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.719959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.720120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.720145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.720337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.720366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.720527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.720552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 
00:27:07.953 [2024-07-15 16:08:34.720735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.720763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.720993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.721159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.721351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.721596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.721764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.721928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.721954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.722091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.722116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.953 [2024-07-15 16:08:34.722298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.953 [2024-07-15 16:08:34.722323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.953 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.722509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.722537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-07-15 16:08:34.722686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.722715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.722874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.722907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.723090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.723116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.723303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.723332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.723483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.723508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.723677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.723719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.723869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.723924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.724093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.724119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.724325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.724353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.724518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.724543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-07-15 16:08:34.724700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.724726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.724891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.724935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.725091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.725117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.725240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.725265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.725427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.725452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.725645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.725670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.725814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.725839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.726016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.726043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.726183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.726208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.726379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.726412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-07-15 16:08:34.726598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.726627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.726801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.726830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.727011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.727037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.727170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.727195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.727369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.727398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.727614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.727655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.727816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.727844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.728006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.728033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.728163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.728188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.728373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.728405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-07-15 16:08:34.728583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.728612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.728784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.728811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.728975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.729126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.729345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.729602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.729784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.729943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.729969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.730101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.730126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.730258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.730299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-07-15 16:08:34.730491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-07-15 16:08:34.730526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-07-15 16:08:34.730699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.730724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.730863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.730895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.731064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.731089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.731252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.731277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.731452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.731480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.731654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.731680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.731840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.731866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.732032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.732057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.732243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.732271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-07-15 16:08:34.732446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.732472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.732614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.732639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.732795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.732820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.732987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.733143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.733379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.733542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.733703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.733867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.733898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.734021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.734046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-07-15 16:08:34.734205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.734246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.734485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.734513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.734659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.734684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.734838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.734863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.735972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.735997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-07-15 16:08:34.736162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.736188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.736321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.736347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.736500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.736525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-07-15 16:08:34.736734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-07-15 16:08:34.736762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.736935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.736961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.737097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.737121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.737306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.737334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.737523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.737549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.737710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.737738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-07-15 16:08:34.737918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-07-15 16:08:34.737970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-07-15 16:08:34.738104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.956 [2024-07-15 16:08:34.738129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.956 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously through [2024-07-15 16:08:34.780788], with only the microsecond timestamps changing: connect() to 10.0.0.2:4420 keeps returning errno = 111 (ECONNREFUSED), so each reconnect attempt for tqpair=0x10e6200 fails and the qpair cannot be recovered ...]
00:27:07.960 [2024-07-15 16:08:34.780980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.781007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.781146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.781172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.781383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.781411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.781615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.781640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.781847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.781883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.782075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.782100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.782254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.782280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.782492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.782520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.782695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.782723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.782907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.782932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 
00:27:07.960 [2024-07-15 16:08:34.783111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.783139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.783318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.783348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.783525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.783551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.783685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.783710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.783870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.783904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.784129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.784157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.784336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.784365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.784566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.960 [2024-07-15 16:08:34.784594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.960 qpair failed and we were unable to recover it. 00:27:07.960 [2024-07-15 16:08:34.784784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.784809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.784967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.784994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.785141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.785169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.785379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.785404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.785613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.785641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.785824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.785849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.785983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.786008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.786150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.786193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.786344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.786374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.786562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.786587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.786788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.786820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.786988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.787014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.787204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.787230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.787410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.787439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.787577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.787607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.787795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.787820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.788000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.788029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.788205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.788233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.788418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.788444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.788625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.788653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.788839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.788867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.789048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.789073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.789226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.789254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.789426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.789454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.789637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.789662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.789868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.789905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.790112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.790140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.790297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.790323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.790500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.790730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.790781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.790967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.790993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.791200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.791228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.791375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.791403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.791585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.791610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.791750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.791775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.791933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.791962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.792145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.792172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.792318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.792350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.792522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.792550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.792732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.792758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.792962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.792991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.793141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.793169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.793345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.793370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.793503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.793528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.793668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.793695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.793871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.793903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.794030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.794056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.794251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.794279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.794464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.794489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.794663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.794692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.794863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.794900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.795118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.795144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 
00:27:07.961 [2024-07-15 16:08:34.795326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.795354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.795521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.795548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.961 qpair failed and we were unable to recover it. 00:27:07.961 [2024-07-15 16:08:34.795728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.961 [2024-07-15 16:08:34.795754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.795955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.795985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.796158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.796186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.796345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.796372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.796535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.796577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.796750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.796779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.796956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.796983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.797159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.797187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-07-15 16:08:34.797394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.797422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.797595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.797621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.797806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.797838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.798967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.798996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.799153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.799182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-07-15 16:08:34.799336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.799361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.799524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.799568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.799762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.799787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.799920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.799946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.800104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.800132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.800299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.800327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.800510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.800536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.800738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.800766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.800914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.800950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.801134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.801159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-07-15 16:08:34.801336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.801364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.801552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.801580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.801793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.801818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.801995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.802024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.802199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.802228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.802411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.802437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.802621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.802650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.802829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.802854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.803013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.803039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.803188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.803216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-07-15 16:08:34.803393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.803422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.803568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.803594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.803798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.803827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.804035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.804246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.804424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.804624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.804822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.804974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.805000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-07-15 16:08:34.805122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-07-15 16:08:34.805148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-07-15 16:08:34.805342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.805367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.805542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.805570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.805714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.805742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.805930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.805960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.806142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.806170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.806349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.806378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.806578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.806603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.806755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.806783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.806963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.806992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.807174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.807199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-07-15 16:08:34.807381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.807409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.807582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.807610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.807814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.807839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.808033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.808062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.808212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.808240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.808421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.808446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.808634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.808662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.808872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.808909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.809053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.809078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-07-15 16:08:34.809259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-07-15 16:08:34.809287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-07-15 16:08:34.823711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.823737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.823896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.823936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.824136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.824164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.824337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.824362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.824524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.824548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.824679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.824704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.824841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.824867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.825061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.825089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.825258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.825301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:07.964 [2024-07-15 16:08:34.825516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.964 [2024-07-15 16:08:34.825543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:07.964 qpair failed and we were unable to recover it.
00:27:08.276 [2024-07-15 16:08:34.848955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.848981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.849143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.849185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.849362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.849390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.849544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.849569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.849710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.849735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.849870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.849902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.850041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.850066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.850238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.850265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.850429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.850461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.850643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.850668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 
00:27:08.276 [2024-07-15 16:08:34.850819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.850847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.851045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.851071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.851211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.851237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.851391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.851416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.851659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.851710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.851858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.851888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.276 [2024-07-15 16:08:34.852029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.276 [2024-07-15 16:08:34.852053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.276 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.852233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.852262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.852449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.852475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.852662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.852691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.852867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.852901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.853074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.853099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.853279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.853307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.853529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.853554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.853685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.853710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.853937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.853963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.854085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.854111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.854296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.854321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.854490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.854518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.854713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.854780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.854959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.854984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.855144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.855186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.855342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.855370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.855575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.855600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.855804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.855831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.856002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.856192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.856400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.856603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.856780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.856951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.856986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.857144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.857185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.857337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.857363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.857487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.857529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.857702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.857729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.857874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.857907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.858045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.858089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.858290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.858318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.858526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.858555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.858766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.858791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.858926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.858952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.859136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.859161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.859367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.859395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.859540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.859568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.859754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.859779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.859920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.859946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.860127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.860152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.860310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.860336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.860489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.860532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.860686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.860714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.860871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.860904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.861062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.861105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.861307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.861335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.861516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.861541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.861724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.861752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.861904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.861943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.862137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.862162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.862376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.862404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.862545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.862574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.862732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.862759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 
00:27:08.277 [2024-07-15 16:08:34.862899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.862941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.863122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.863147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.863335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.863360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.863517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.863546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.863756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.863782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.277 qpair failed and we were unable to recover it. 00:27:08.277 [2024-07-15 16:08:34.863981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.277 [2024-07-15 16:08:34.864006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.864136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.864162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.864345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.864371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.864566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.864591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.864799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.864827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.864975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.865155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.865386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.865563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.865767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.865957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.865983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.866117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.866142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.866303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.866328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.866476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.866508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.866662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.866690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.866887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.866923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.867077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.867102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.867282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.867310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.867498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.867523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.867727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.867755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.867957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.867986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.868141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.868166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.868345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.868372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.868522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.868551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.868705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.868730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.868872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.868904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.869056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.869084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.869293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.869318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.869483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.869508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.869665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.869690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.869822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.869847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.870028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.870057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.870264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.870292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.870472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.870496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.870676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.870704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.870881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.870909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.871118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.871143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.871294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.871323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.871495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.871523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.871684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.871710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.871864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.871902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.872053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.872081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.872266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.872291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.872452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.872477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.872642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.872670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.872844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.872869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.873025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.873055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.873255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.873284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.873466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.873491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.873658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.873686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.873853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.873887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.874045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.874072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.874251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.874280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.874457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.874482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.874644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.874669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 
00:27:08.278 [2024-07-15 16:08:34.874851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.874885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.875040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.875068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.875206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.875232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.278 qpair failed and we were unable to recover it. 00:27:08.278 [2024-07-15 16:08:34.875352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.278 [2024-07-15 16:08:34.875378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.875562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.875590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.875745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.875770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.875933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.875958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.876143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.876349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.876373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 00:27:08.279 [2024-07-15 16:08:34.876560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.279 [2024-07-15 16:08:34.876585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.279 qpair failed and we were unable to recover it. 
00:27:08.279 [2024-07-15 16:08:34.876711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.279 [2024-07-15 16:08:34.876751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.279 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with timestamps advancing from 16:08:34.876915 through 16:08:34.918146; roughly 200 duplicate occurrences elided ...]
00:27:08.283 [2024-07-15 16:08:34.918378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.283 [2024-07-15 16:08:34.918406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.283 qpair failed and we were unable to recover it.
00:27:08.283 [2024-07-15 16:08:34.918579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.918609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.918811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.918839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.919034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.919060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.919274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.919301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.919479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.919504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.919696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.919728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.919923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.919949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.920105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.920130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.920276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.920301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.920459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.920485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.920673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.920698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.920855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.920902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.921074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.921103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.921285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.921310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.921460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.921487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.921685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.921713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.921903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.921930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.922086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.922115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.922338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.922366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.922519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.922545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.922748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.922776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.922941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.922972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.923141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.923176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.923349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.923379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.923550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.923578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.923761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.923786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.923932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.923958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.924115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.924140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.924296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.924321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.924501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.924529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.924675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.924703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.924915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.924958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.925128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.925170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.925310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.925338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.925521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.925546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.925706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.925731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.925861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.925894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.926066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.926091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.926273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.926301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.926504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.926532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.926684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.926710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.926932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.926961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.927133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.927161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.927344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.927369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.927556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.927583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.927784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.927816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.928010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.928037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.928192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.928221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.928419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.928448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.928624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.928649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.928829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.928857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.929011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.929039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.929222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.929247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.929431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.929459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.929642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.929671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.929889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.929915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.930096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.930124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.930308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.930336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.930522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.930548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.930724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.930753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 
00:27:08.283 [2024-07-15 16:08:34.930924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.930954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.283 [2024-07-15 16:08:34.931163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.283 [2024-07-15 16:08:34.931189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.283 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.931337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.931367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.931539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.931567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.931762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.931791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.931944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.931970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.932129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.932153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.932323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.932349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.932523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.932551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.932750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.932778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.932964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.932990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.933158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.933184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.933373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.933402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.933578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.933603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.933729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.933773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.933958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.933987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.934167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.934192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.934348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.934373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.934506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.934531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.934721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.934746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.934915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.934940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.935089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.935114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.935274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.935299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.935445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.935473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.935674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.935701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.935902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.935931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.936106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.936134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.936282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.936311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.936464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.936489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.936629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.936672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.936884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.936912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.937095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.937120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.937258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.937283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.937439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.937464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.937622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.937649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.937835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.937863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.938016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.938044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.938230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.938256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.938445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.938473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.938626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.938654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.938833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.938858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.939017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.939045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.939224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.939252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.939396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.939422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.939626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.939654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.939803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.939833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.940019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.940177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.940427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.940581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.940735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.940945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.940973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.941120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.941145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.941282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.941306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.941453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.941480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.941664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.941688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.941818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.941860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.942073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.942101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.942251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.942276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.942434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.942477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 
00:27:08.284 [2024-07-15 16:08:34.942644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.942672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.942859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.942890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.943046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.943074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.943251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.943279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.943453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.943478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.943603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.943649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.284 qpair failed and we were unable to recover it. 00:27:08.284 [2024-07-15 16:08:34.943846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.284 [2024-07-15 16:08:34.943875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.285 qpair failed and we were unable to recover it. 00:27:08.285 [2024-07-15 16:08:34.944077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.285 [2024-07-15 16:08:34.944102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.285 qpair failed and we were unable to recover it. 00:27:08.285 [2024-07-15 16:08:34.944287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.285 [2024-07-15 16:08:34.944315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.285 qpair failed and we were unable to recover it. 00:27:08.285 [2024-07-15 16:08:34.944456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.285 [2024-07-15 16:08:34.944485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.285 qpair failed and we were unable to recover it. 
00:27:08.285 [2024-07-15 16:08:34.944659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.285 [2024-07-15 16:08:34.944685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.285 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() refused with errno = 111 (ECONNREFUSED), the resulting sock connection error for tqpair=0x7f1708000b90 against 10.0.0.2 port 4420, and the unrecoverable-qpair notice — repeats continuously from 16:08:34.944 through 16:08:34.985 ...]
00:27:08.288 [2024-07-15 16:08:34.985402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.985427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.985577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.985606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.985773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.985801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.985953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.985979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.986134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.986181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.986356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.986383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.986558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.986582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.986725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.986753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.986931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.986960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.987132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.987157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 
00:27:08.288 [2024-07-15 16:08:34.987286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.987312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.987483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.987511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.987667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.987706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.987859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.987894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.988062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.988090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.988272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.988299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.988448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.988477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.988651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.988679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.988850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.988886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.989063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.989088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 
00:27:08.288 [2024-07-15 16:08:34.989273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.989298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.989455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.989481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.989662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.989690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.989892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.989918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.990077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.990103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.990317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.990345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.990510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.990538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.990712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.990737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.990869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.990901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.991037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.991062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 
00:27:08.288 [2024-07-15 16:08:34.991219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.991244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.991380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.991408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.991579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.991607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.991780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.991807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.991981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.992010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.992186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.992215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.992430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.992456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.992630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.992658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.992806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.992834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.993032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.993057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 
00:27:08.288 [2024-07-15 16:08:34.993233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.993263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.993470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.993499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.993681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.993706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.993889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.993918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.994057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.994085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.994233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.994258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.994414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.994455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.994631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.994658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.994849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.994884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.995064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.995089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 
00:27:08.288 [2024-07-15 16:08:34.995252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.995277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.995403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.995428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.995631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.995663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.995799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.995827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.996031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.996057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.288 [2024-07-15 16:08:34.996190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-07-15 16:08:34.996217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.288 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.996367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.996392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.996547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.996572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.996720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.996748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.996905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.996934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:34.997098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.997124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.997322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.997347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.997549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.997575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.997765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.997790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.997974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.998002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.998200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.998228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.998381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.998406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.998577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.998604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.998771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.998799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.998978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:34.999186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.999364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.999568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.999731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:34.999922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:34.999948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.000072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.000097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.000277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.000305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.000476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.000504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.000685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.000710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.000849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.000874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.001018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.001044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.001227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.001252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.001426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.001453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.001637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.001662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.001850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.001882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.002059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.002087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.002238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.002266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.002476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.002501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.002713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.002740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.002917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.002946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.003124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.003150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.003304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.003332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.003532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.003564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.003744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.003769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.003910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.003937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.004096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.004139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.004296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.004321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.004497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.004525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.004701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.004731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.004887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.004912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.005048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.005074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.005244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.005269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.005426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.005451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.005639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.005668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.005848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.005883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.006065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.006091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.006239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.006265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.006420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.006445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.006599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.006624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.006798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.006826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.007033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.007058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.007248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.007274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.007426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.007454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.007633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.007661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.007835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.007860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.008049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.008078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.008277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.008306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.008488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.008513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.008683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.008708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.008900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.008929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.009110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.009136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.009283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.009311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.009470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.009495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.009682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.009706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.009914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.009943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.010117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.010145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.010349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.010374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.010539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.010565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.010741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.010770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.010925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.010951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 
00:27:08.289 [2024-07-15 16:08:35.011108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.011151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.011326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.011356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.011541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.289 [2024-07-15 16:08:35.011776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.289 [2024-07-15 16:08:35.011804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.289 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.011994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.012173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.012375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.012545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.012756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.012908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.012934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.013136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.013163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.013346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.013371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.013504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.013547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.013717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.013745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.013942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.013969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.014148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.014176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.014351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.014379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.014552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.014578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.014710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.014735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.014895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.014943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.015131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.015156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.015355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.015572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.015600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.015748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.015773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.015918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.015962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.016142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.016170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.016326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.016352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.016522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.016550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.016691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.016719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.016905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.016931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.017105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.017132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.017304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.017332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.017515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.017541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.017682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.017711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.017887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.017916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.018121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.018146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.018333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.018361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.018532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.018560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.018738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.018765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.018913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.018940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.019109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.019153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.019348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.019373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.019514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.019546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.019691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.019718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.019939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.019965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.020123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.020153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.020310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.020338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.020517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.020543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.020750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.020778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.020979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.021007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.021157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.021182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.021358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.021386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.021561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.021588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.021767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.021792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.022005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.022033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.022238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.022266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.022446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.022472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.022622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.022650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.022827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.022855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.023079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.023104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 
00:27:08.290 [2024-07-15 16:08:35.023280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.023305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.023468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.023493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.023676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.023701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.023905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.023934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.024072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.024100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.024280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.024305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.024444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.024469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.024646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.024670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.290 qpair failed and we were unable to recover it. 00:27:08.290 [2024-07-15 16:08:35.024794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.290 [2024-07-15 16:08:35.024818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.024957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.024998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.025175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.025205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.025348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.025373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.025509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.025534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.025722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.025747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.025911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.025938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.026144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.026173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.026346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.026374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.026556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.026583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.026764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.026792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.026972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.027001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.027184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.027209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.027392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.027420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.027593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.027626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.027808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.027833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.028055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.028269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.028438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.028610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.028824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.028983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.029151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.029307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.029524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.029731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.029950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.029975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.030140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.030165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.030352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.030380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.030522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.030550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.030694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.030719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.030845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.030870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.031064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.031091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.031261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.031286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.031439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.031481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.031635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.031663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.031895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.031924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.032063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.032088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.032277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.032305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.032487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.032512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.032658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.032686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.032842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.032871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.033022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.033047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.033200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.033243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.033389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.033418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.033603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.033628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.033781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.033810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.033985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.034014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.034228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.034253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.034404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.034432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.034642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.034667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.034824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.034849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.035006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.035204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.035402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.035568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.035764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.035970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.035996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.036172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.036203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.036374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.036401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.036548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.036574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.036740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.036766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.036920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.036955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.037130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.037158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.037328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.037353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.037486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.037511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.037673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.037699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.037828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.037853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.038026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.038195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.038360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.038552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 
00:27:08.291 [2024-07-15 16:08:35.038707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.038857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.038889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.039054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.039080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.039236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.039262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.039397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.039422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.291 [2024-07-15 16:08:35.039589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.291 [2024-07-15 16:08:35.039614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.291 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.039745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.039772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.039908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.039934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.040073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.040099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.040233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.040259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 
00:27:08.292 [2024-07-15 16:08:35.040424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.040449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.040610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.040635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.040790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.040814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.040983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.041142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.041335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.041521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.041704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.041914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.041939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.042076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.042101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 
00:27:08.292 [2024-07-15 16:08:35.042263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.042288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.042430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.042456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.042587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.042616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.042812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.042837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.042990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.043155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.043316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.043501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.043691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.043882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.043909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 
00:27:08.292 [2024-07-15 16:08:35.044070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.044260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.044424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.044609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.044784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.044954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.044981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.045126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.045152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.045298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.045324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.045460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.045485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 00:27:08.292 [2024-07-15 16:08:35.045649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.292 [2024-07-15 16:08:35.045674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.292 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.080142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.080167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.080360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.080386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.080574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.080599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.080726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.080751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.080895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.080921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.081059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.081084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.081247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.081272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.081454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.081479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.081638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.081663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.081846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.081871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.082041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.082200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.082384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.082543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.082735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.082888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.082914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.083048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.083235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.083415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.083569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.083785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.083968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.083996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.084164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.084189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.084345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.084370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.084551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.084577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.084729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.084754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.084891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.084917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.085079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.085104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.085237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.085262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.085423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.085448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.085632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.085657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.085819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.085844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.086911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.086937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.087122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.087147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.087303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.087328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.087518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.087543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.087699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.087725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.087890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.087916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.088106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.088131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.088255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.088280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.088413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.088438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.088635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.088660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.088820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.088845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.089031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.089221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.089432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.089587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.089775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.089940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.089967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.090152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.090177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.090347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.090373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.090535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.090561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.090732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.090757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.090945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.090971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.091131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.091160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 
00:27:08.295 [2024-07-15 16:08:35.091291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.091315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.091472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.091497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.091658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.091683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.091809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.091836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.092008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.092034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.092187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.092212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.092383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.295 [2024-07-15 16:08:35.092408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.295 qpair failed and we were unable to recover it. 00:27:08.295 [2024-07-15 16:08:35.092595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.092620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.092757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.092782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.092953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.092979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.093140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.093165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.093353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.093378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.093535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.093559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.093754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.093779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.093909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.093934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.094068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.094093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.094268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.094293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.094451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.094476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.094629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.094654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.094842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.094867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.095024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.095049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.095231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.095259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.095444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.095470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.095632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.095659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.095831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.095859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.096041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.096066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.096253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.096282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.096458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.096486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.096634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.096659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.096821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.096868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.097023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.097052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.097236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.097262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.097424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.097450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.097625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.097654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.097808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.097833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.098047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.098076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.098261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.098291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.098447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.098473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.098654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.098683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.098863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.098904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.099058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.099083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.099305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.099333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.099508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.099535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.099689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.099714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.099869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.099903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.100036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.100200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.100350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.100541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.100753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.100957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.100986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.101164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.101192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.101371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.101397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.101614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.101643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.101786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.101814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.101963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.101988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.102147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.102188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.102329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.102356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.102536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.102562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.102701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.102726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.102887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.102930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.103111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.103136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.103338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.103366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.103545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.103572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.103718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.103744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.103869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.103901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.104099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.104127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.104336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.104361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.104499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.104524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.104709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.104733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.104904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.104930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.105110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.105138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.105324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.105349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.105537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.105561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.105738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.105765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.105915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.105943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.106116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.106142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.106312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.106339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.106516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.106543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.106696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.106725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 
00:27:08.296 [2024-07-15 16:08:35.106889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.106932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.107102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.107130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.107313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.107338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.107517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-07-15 16:08:35.107545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-07-15 16:08:35.107752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.107780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-07-15 16:08:35.107966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.107991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-07-15 16:08:35.108152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.108180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-07-15 16:08:35.108360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.108388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-07-15 16:08:35.108567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.108592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-07-15 16:08:35.108804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-07-15 16:08:35.108832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 
00:27:08.299 [2024-07-15 16:08:35.146800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.146825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.146980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.147167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.147322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.147527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.147728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.147916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.147942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.148145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.148173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.148316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.148344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.148531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.148556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 
00:27:08.299 [2024-07-15 16:08:35.148732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.148760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.148931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.148959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.149105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.149130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.149338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.149366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.149518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.149545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.149732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.149758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.149940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.149969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.150147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.150175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.150345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.150370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.150549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.150578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 
00:27:08.299 [2024-07-15 16:08:35.150756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.150784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.150990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.151016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.151162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.151190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-07-15 16:08:35.151392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-07-15 16:08:35.151420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.151566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.151591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.151746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.151788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.151943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.151973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.152162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.152187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.152397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.152425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.152573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.152601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 
00:27:08.300 [2024-07-15 16:08:35.152809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.152837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.153038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.153234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.153419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.153601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.153801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.153977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.154131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.154331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.154542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 
00:27:08.300 [2024-07-15 16:08:35.154732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.154946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.154972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.155160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.155185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.155340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.155367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.155553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.155578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.155710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.155735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.155925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.155954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.156128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.156157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.156303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.156328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.156508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.156535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 
00:27:08.300 [2024-07-15 16:08:35.156684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.156713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.156891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.156917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.157050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.157075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.157237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.157262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.157429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.157454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.300 [2024-07-15 16:08:35.157607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.300 [2024-07-15 16:08:35.157636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.300 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.157819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.157848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.158006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.158033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.158184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.158228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.158396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.158423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 
00:27:08.585 [2024-07-15 16:08:35.158601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.158626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.158793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.158820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.159017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.159043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.159208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.159233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.159417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.159626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.159654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.159806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.159832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.160000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.160026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.160212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.160238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.160435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.160461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 
00:27:08.585 [2024-07-15 16:08:35.160644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.160672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.160847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.160883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.161947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.585 [2024-07-15 16:08:35.161976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.585 qpair failed and we were unable to recover it. 00:27:08.585 [2024-07-15 16:08:35.162223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.162248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.162414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.162446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 
00:27:08.586 [2024-07-15 16:08:35.162614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.162643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.162797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.162822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.162985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.163010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.163195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.163222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.163426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.163451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.163634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.163662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.163898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.163924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.164082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.164107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.164252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.164280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.164455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.164482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 
00:27:08.586 [2024-07-15 16:08:35.164626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.164651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.164786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.164832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.165056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.165081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.165215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.165241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.165422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.165450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.165619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.165647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.165795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.165820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.166006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.166032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.166221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.166249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.166452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.166477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 
00:27:08.586 [2024-07-15 16:08:35.166653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.166681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.166817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.166845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.167037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.167063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.167256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.167284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.167438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.167467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.167617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.167644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.167837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.167865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.168056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.168082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.168236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.168261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.168437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.168466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 
00:27:08.586 [2024-07-15 16:08:35.168607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.168635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.168789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.168814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.168981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.169006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.169160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.169188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.169397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.169422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.169603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.169632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.169808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.169836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.586 qpair failed and we were unable to recover it. 00:27:08.586 [2024-07-15 16:08:35.170012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.586 [2024-07-15 16:08:35.170038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.170225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.170250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.170453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.170478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 
00:27:08.587 [2024-07-15 16:08:35.170638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.170663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.170836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.170864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.171043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.171071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.171224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.171248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.171409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.171434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.171638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.171666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.171840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.171865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.172054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.172082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.172257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.172285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.172465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.172490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 
00:27:08.587 [2024-07-15 16:08:35.172637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.172665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.172814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.172843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.173046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.173071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.173252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.173280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.173456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.173484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.173626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.173652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.173811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.173856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.174015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.174043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.174217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.174242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 00:27:08.587 [2024-07-15 16:08:35.174444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.587 [2024-07-15 16:08:35.174472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.587 qpair failed and we were unable to recover it. 
00:27:08.587 [2024-07-15 16:08:35.174672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.587 [2024-07-15 16:08:35.174700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.587 qpair failed and we were unable to recover it.
00:27:08.587 [... the three-line failure above repeats ~210 times with only the timestamps advancing, from 16:08:35.174672 through 16:08:35.216687: every reconnect attempt by tqpair=0x7f1708000b90 to 10.0.0.2, port 4420 is refused with errno = 111 and the qpair cannot be recovered ...]
00:27:08.593 [2024-07-15 16:08:35.216660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.593 [2024-07-15 16:08:35.216687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.593 qpair failed and we were unable to recover it.
00:27:08.593 [2024-07-15 16:08:35.216865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.216900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.217103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.217129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.217281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.217309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.217459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.217487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.217664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.217689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.217894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.217923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.218096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.218124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.218282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.218307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.218468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.218494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.218668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.218696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 
00:27:08.593 [2024-07-15 16:08:35.218887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.218912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.219074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.219099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.219245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.219270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.219427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.219452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.219600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.219628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.219772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.219801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.220008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.220033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.220212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.220240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.220412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.220441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.220619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.220645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 
00:27:08.593 [2024-07-15 16:08:35.220826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.220854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.221019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.221044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.221197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.221222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.221382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.221407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.221568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.221593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.593 [2024-07-15 16:08:35.221781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.593 [2024-07-15 16:08:35.221806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.593 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.221985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.222160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.222369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.222543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 
00:27:08.594 [2024-07-15 16:08:35.222745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.222946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.222972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.223107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.223150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.223312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.223340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.223518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.223543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.223721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.223748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.223951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.223986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.224170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.224195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.224347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.224374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.224573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.224601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 
00:27:08.594 [2024-07-15 16:08:35.224752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.224777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.224969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.224997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.225189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.225214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.225378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.225403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.225540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.225564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.225705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.225733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.225913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.225939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.226091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.226123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.226305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.226330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.226487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.226512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 
00:27:08.594 [2024-07-15 16:08:35.226653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.226679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.226813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.226838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.227029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.227054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.227242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.227270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.227409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.227438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.227642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.227667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.227845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.227873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.228092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.228120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.228329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.594 [2024-07-15 16:08:35.228354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.594 qpair failed and we were unable to recover it. 00:27:08.594 [2024-07-15 16:08:35.228534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.228563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 
00:27:08.595 [2024-07-15 16:08:35.228715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.228741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.228908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.228934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.229122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.229147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.229288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.229315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.229502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.229528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.229693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.229718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.229882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.229908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.230049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.230074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.230230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.230255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.230411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.230436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 
00:27:08.595 [2024-07-15 16:08:35.230622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.230647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.230828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.230853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.231051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.231239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.231446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.231606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.231821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.231995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.232152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.232364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 
00:27:08.595 [2024-07-15 16:08:35.232514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.232702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.232901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.232927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.233967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.233993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.234121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.234146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 
00:27:08.595 [2024-07-15 16:08:35.234316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.234342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.234535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.234560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.234684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.234709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.234897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.234923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.235081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.235106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.235236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.235262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.235428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.235453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.235621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.235645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.235807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.235833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 00:27:08.595 [2024-07-15 16:08:35.236002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.595 [2024-07-15 16:08:35.236028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.595 qpair failed and we were unable to recover it. 
00:27:08.596 [2024-07-15 16:08:35.236189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.236214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.236351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.236378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.236512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.236537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.236688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.236728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.236896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.236925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.237092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.237117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.237250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.237276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.237436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.237462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.237618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.237660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.237818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.237861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 
00:27:08.596 [2024-07-15 16:08:35.238031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.238058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.238189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.238215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.238395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.238423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.238612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.238654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.238834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.238860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.239029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.239055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.239236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.239285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.239470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.239513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.239698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.239742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.239931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.239957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 
00:27:08.596 [2024-07-15 16:08:35.240135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.240178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.240372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.240401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.240564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.240592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.240770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.240797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.241003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.241048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.241208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.241250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.241399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.241428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.241625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.241668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.241798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.241825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 00:27:08.596 [2024-07-15 16:08:35.242043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.596 [2024-07-15 16:08:35.242088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.596 qpair failed and we were unable to recover it. 
00:27:08.596 [2024-07-15 16:08:35.242255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.596 [2024-07-15 16:08:35.242298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.596 qpair failed and we were unable to recover it.
[the same three-line failure repeats continuously from 16:08:35.242255 through 16:08:35.286670, alternating between tqpair=0x7f1700000b90 and tqpair=0x10e6200, always with addr=10.0.0.2, port=4420; identical entries elided]
00:27:08.602 [2024-07-15 16:08:35.286868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.286920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.287075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.287100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.287253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.287278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.287485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.287513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.287713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.287741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.287890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.287916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.288113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.288141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.288286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.288314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.288498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.288523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.288669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.288697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 
00:27:08.602 [2024-07-15 16:08:35.288847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.288875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.289082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.289107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.289284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.289312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.289512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.289540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.289697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.289722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.289840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.289866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.290012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.290173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.290324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.290550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 
00:27:08.602 [2024-07-15 16:08:35.290758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.290965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.290993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.291134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.291162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.291338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.291363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.291495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.291538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.291717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.291745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.291895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.291921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.292062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.292087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.292215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.292240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.292402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.292427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 
00:27:08.602 [2024-07-15 16:08:35.292563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.292590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.292803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.292830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.293031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.293061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.293280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.293305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.293469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.293494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.293630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.293655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.293825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.293853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.294006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.294034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.294206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.602 [2024-07-15 16:08:35.294231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.602 qpair failed and we were unable to recover it. 00:27:08.602 [2024-07-15 16:08:35.294365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.294408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 
00:27:08.603 [2024-07-15 16:08:35.294595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.294624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.294785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.294810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.294997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.295191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.295394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.295595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.295777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.295962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.295988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.296127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.296152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.296352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.296380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 
00:27:08.603 [2024-07-15 16:08:35.296536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.296560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.296721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.296746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.296907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.296935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.297120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.297145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.297308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.297333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.297490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.297515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.297651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.297676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.297885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.297913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.298090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.298117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.298299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.298329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 
00:27:08.603 [2024-07-15 16:08:35.298511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.298539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.298737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.298765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.298943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.298968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.299094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.299135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.299316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.299344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.299545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.299571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.299757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.299785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.299991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.300175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.300323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 
00:27:08.603 [2024-07-15 16:08:35.300560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.300793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.300960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.300988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.301164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.301191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.603 [2024-07-15 16:08:35.301348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.603 [2024-07-15 16:08:35.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.603 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.301561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.301586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.301736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.301763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.301947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.301971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.302154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.302182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.302327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.302355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 
00:27:08.604 [2024-07-15 16:08:35.302538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.302564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.302743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.302770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.302933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.302958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.303081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.303106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.303232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.303272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.303446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.303474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.303624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.303652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.303859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.303892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.304068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.304095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.304268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.304292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 
00:27:08.604 [2024-07-15 16:08:35.304470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.304498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.304646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.304674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.304851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.304880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.305055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.305237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.305422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.305601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.305819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.305977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.306179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 
00:27:08.604 [2024-07-15 16:08:35.306390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.306593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.306794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.306966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.306995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.307143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.307168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.307331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.307356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.307515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.307540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.307665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.307689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.307858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.307891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.308045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.308073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 
00:27:08.604 [2024-07-15 16:08:35.308247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.308271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.308394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.308434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.308612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.308639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.308851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.309062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.309090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.309266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.604 [2024-07-15 16:08:35.309294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.604 qpair failed and we were unable to recover it. 00:27:08.604 [2024-07-15 16:08:35.309472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.309496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.309674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.309701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.309909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.309934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.310091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.310117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 
00:27:08.605 [2024-07-15 16:08:35.310290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.310318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.310464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.310491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.310669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.310694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.310881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.310909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.311063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.311090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.311268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.311293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.311467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.311495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.311677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.311704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.311890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.311916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.312133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.312162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 
00:27:08.605 [2024-07-15 16:08:35.312334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.312362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.312540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.312566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.312716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.312743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.312929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.312957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.313104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.313129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.313324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.313351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.313539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.313566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.313743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.313767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.313940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.313967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.314179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.314204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 
00:27:08.605 [2024-07-15 16:08:35.314331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.314356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.314521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.314545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.314746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.314773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.314986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.315190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.315357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.315586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.315752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.315965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.315991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.316195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.316219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 
00:27:08.605 [2024-07-15 16:08:35.316405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.316433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.316582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.316609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.316763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.316788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.316966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.316995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.317183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.317212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.317343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.605 [2024-07-15 16:08:35.317369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.605 qpair failed and we were unable to recover it. 00:27:08.605 [2024-07-15 16:08:35.317576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.317603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.317749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.317777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.317957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.318115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.318156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 
00:27:08.606 [2024-07-15 16:08:35.318311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.318339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.318520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.318545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.318678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.318703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.318865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.318914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.319068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.319092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.319246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.319291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.319491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.319519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.319693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.319718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.319926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.319953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.320101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.320128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 
00:27:08.606 [2024-07-15 16:08:35.320307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.320332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.320503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.320531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.320704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.320731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.320892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.320918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.321092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.321119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.321304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.321331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.321535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.321559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.321734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.321761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.321935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.321964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.322171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.322196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 
00:27:08.606 [2024-07-15 16:08:35.322340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.322367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.322522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.322553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.322735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.322760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.322937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.322966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.323132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.323160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.323368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.323393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.323564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.323591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.323789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.323817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.323999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.324025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.324183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.324225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 
00:27:08.606 [2024-07-15 16:08:35.324399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.324427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.324632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.324658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.324810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.324838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.325028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.325053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.325238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.325263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.325401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.325426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.325563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.606 [2024-07-15 16:08:35.325588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.606 qpair failed and we were unable to recover it. 00:27:08.606 [2024-07-15 16:08:35.325774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.325799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.325977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.326006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.326185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.326213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 
00:27:08.607 [2024-07-15 16:08:35.326400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.326425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.326608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.326633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.326807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.326835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.326996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.327173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.327370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.327556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.327758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.327937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.327963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.328153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.328179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 
00:27:08.607 [2024-07-15 16:08:35.328360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.328388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.328590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.328617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.328770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.328795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.328999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.329028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.329198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.329226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.329404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.329429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.329610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.329638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.329819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.329846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.330038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.330064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.330208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.330238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 
00:27:08.607 [2024-07-15 16:08:35.330409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.330438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.330636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.330661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.330861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.330907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.331055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.331083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.331251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.331276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.331413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.607 [2024-07-15 16:08:35.331438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.607 qpair failed and we were unable to recover it. 00:27:08.607 [2024-07-15 16:08:35.331600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.331625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.331784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.331809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.332014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.332043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.332242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.332270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 
00:27:08.608 [2024-07-15 16:08:35.332452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.332477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.332682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.332710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.332855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.332887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.333071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.333233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.333416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.333630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.333803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.333991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.334016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.334149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.334174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 
00:27:08.608 [2024-07-15 16:08:35.334380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.334408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.334577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.334604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.334759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.334787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.334994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.335020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.335220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.335248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.335405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.335430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.335582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.335607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.335757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.335799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.335982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.336008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.336223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.336255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 
00:27:08.608 [2024-07-15 16:08:35.336467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.336496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.336679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.336703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.336910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.336937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.337110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.337138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.337289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.337314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.337475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.337519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.337698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.337725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.337938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.337964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.338148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.608 [2024-07-15 16:08:35.338176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.608 qpair failed and we were unable to recover it. 00:27:08.608 [2024-07-15 16:08:35.338349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.338377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 
00:27:08.609 [2024-07-15 16:08:35.338555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.338581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.338731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.338758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.338923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.338951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.339122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.339147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.339299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.339324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.339498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.339525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.339707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.339733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.339897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.339923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.340081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.340105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.340233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 
00:27:08.609 [2024-07-15 16:08:35.340434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.340462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.340638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.340666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.340837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.340865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.341049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.341074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.341248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.341275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.341460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.341485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.341622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.341650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.341830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.341857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.342042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.342067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.342248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.342276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 
00:27:08.609 [2024-07-15 16:08:35.342473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.342500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.342685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.342710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.342895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.342940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.343079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.343105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.343262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.343287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.343464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.343491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.343657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.343684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.343863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.343894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.344045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.344072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.344249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.344275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 
00:27:08.609 [2024-07-15 16:08:35.344433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.344457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.344661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.344689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.344863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.344897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.609 [2024-07-15 16:08:35.345082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.609 [2024-07-15 16:08:35.345108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.609 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.345258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.345287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.345430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.345457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.345636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.345661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.345809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.345836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.345996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.346205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 
00:27:08.610 [2024-07-15 16:08:35.346392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.346549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.346794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.346961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.346991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.347150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.347175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.347368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.347393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.347543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.347570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.347753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.347778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.347965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.347990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.348198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.348225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 
00:27:08.610 [2024-07-15 16:08:35.348374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.348401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.348573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.348597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.348752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.348780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.348951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.348979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.349182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.349207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.349382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.349409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.349617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.349645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.349839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.349864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.350027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.350054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 00:27:08.610 [2024-07-15 16:08:35.350222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.610 [2024-07-15 16:08:35.350251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.610 qpair failed and we were unable to recover it. 
00:27:08.610 [2024-07-15 16:08:35.350460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.350484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.350663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.350690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.350865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.350901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.351049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.351074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.351276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.351304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.351472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.351499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.351707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.351731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.351907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.351936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.352092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.352119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.610 qpair failed and we were unable to recover it.
00:27:08.610 [2024-07-15 16:08:35.352276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.610 [2024-07-15 16:08:35.352302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.352459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.352502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.352649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.352677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.352859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.352889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.353076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.353104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.353305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.353332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.353510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.353534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.353708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.353736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.353914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.353943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.354100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.354125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.354291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.354318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.354487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.354514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.354689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.354716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.354900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.354942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.355081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.355105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.355309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.355349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.355547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.355591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.355780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.355824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.355994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.356021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.356218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.356243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.356428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.356471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.356798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.356850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.357027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.357053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.357207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.357249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.357447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.357500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.357717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.357760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.357924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.357950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.358132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.358177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.358353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.358401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.358562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.358606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.358789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.358814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.359000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.359043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.359236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.359280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.359578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.359629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.359794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.359820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.359958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.359985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.360147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.360174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.360347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.611 [2024-07-15 16:08:35.360375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.611 qpair failed and we were unable to recover it.
00:27:08.611 [2024-07-15 16:08:35.360547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.360575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.360761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.360789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.360928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.360954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.361165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.361194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.361457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.361499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.361683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.361726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.361997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.362041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.362205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.362249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.362456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.362499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.362658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.362683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.362866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.362901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.363119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.363163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.363354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.363382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.363690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.363744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.363930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.363956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.364143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.364186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.364368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.364411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.364628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.364674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.364842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.364869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.365049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.365092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.365243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.365287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.365468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.365511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.365695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.365721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.365889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.365915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.366074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.366117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.366304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.366346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.366500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.366543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.366707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.366732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.366957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.367001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.367178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.367225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.367413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.367441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.367599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.612 [2024-07-15 16:08:35.367625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.612 qpair failed and we were unable to recover it.
00:27:08.612 [2024-07-15 16:08:35.367763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.367790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.368000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.368045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.368195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.368237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.368446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.368489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.368649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.368676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.368831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.368856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.369030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.369073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.369232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.369276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.369453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.369496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.369628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.369653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.369838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.369864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.370054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.370096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.370287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.370317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.370463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.370491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.370657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.370684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.370828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.370855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.371020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.371046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.371278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.371328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.371482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.371509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.371707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.371734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.371893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.371918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.372082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.372107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.372292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.372320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.372513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.372563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.372739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.372766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.372930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.372956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.373094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.373120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.373258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.373282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.373437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.373463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.373622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.373651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.373849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.373882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.374038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.374063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.374193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.374218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.374419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.374447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.374646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.374856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.374886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.375026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.375050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.613 qpair failed and we were unable to recover it.
00:27:08.613 [2024-07-15 16:08:35.375235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.613 [2024-07-15 16:08:35.375264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.375443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.375471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.375652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.375680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.375858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.375894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.376082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.376106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.376292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.376316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.376500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.376527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.376676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.376703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.376888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.376939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.377110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.377137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.377336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.377376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.377553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.377580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.377779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.377806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.377959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.377985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.378148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.378189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.378369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.378393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.378807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.378865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.379032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.379058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.379216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.379244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.379554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.379598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.379760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.379785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.379947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.379972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.380129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.380153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.380361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.380389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.380562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.380590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.380763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.380791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.380971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.380997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.381132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.381172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.381380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.381405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.381583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.381616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.381788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.381816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.381970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.381995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.382177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.382205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.382345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.382373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.382605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.382657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.382866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.382921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.383085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.383111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.383249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.383274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.383453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.383481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.383669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.383697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.383844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.614 [2024-07-15 16:08:35.383869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.614 qpair failed and we were unable to recover it.
00:27:08.614 [2024-07-15 16:08:35.384030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.384072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.384260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.384288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.384504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.384529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.384724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.384752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.384925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.384953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.385136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.385161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.385376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.385403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.385582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.385609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.385755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.385779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.385983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.386011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.386161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.386191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.386366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.386392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.386570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.615 [2024-07-15 16:08:35.386598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.615 qpair failed and we were unable to recover it.
00:27:08.615 [2024-07-15 16:08:35.386809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.386836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.387060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.387254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.387453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.387635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.387816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.387989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.388018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.388225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.388249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.388427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.388455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.388655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.388683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 
00:27:08.615 [2024-07-15 16:08:35.388836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.388861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.389029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.389054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.389278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.389306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.389492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.389516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.389663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.389691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.389867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.389902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.390090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.390116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.390285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.390313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.390482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.390510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.390691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.390716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 
00:27:08.615 [2024-07-15 16:08:35.390872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.390932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.391100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.391128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.615 [2024-07-15 16:08:35.391308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.615 [2024-07-15 16:08:35.391332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.615 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.391513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.391541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.391710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.391738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.391889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.391914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.392075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.392102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.392313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.392341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.392542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.392567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.392751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.392786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 
00:27:08.616 [2024-07-15 16:08:35.392930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.392956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.393115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.393140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.393317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.393345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.393546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.393574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.393735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.393760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.393892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.393935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.394122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.394147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.394306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.394331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.394503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.394530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.394701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.394728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 
00:27:08.616 [2024-07-15 16:08:35.394887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.394913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.395069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.395094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.395275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.395303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.395483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.395508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.395678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.395705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.395840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.395868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.396053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.396078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.396244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.396269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.396397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.396424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.396609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.396634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 
00:27:08.616 [2024-07-15 16:08:35.396776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.396804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.396980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.397198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.397366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.397567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.397735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.397896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.397923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.398131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.398156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.398320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.398346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.398519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.398546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 
00:27:08.616 [2024-07-15 16:08:35.398730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.398755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.398905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.398931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.616 qpair failed and we were unable to recover it. 00:27:08.616 [2024-07-15 16:08:35.399138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.616 [2024-07-15 16:08:35.399166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.399338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.399367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.399541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.399566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.399747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.399775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.399974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.400150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.400355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.400560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 
00:27:08.617 [2024-07-15 16:08:35.400800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.400963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.400989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.401142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.401187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.401341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.401366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.401553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.401581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.401764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.401792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.401971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.401997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.402151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.402176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.402362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.402389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.402558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.402583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 
00:27:08.617 [2024-07-15 16:08:35.402746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.402787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.402947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.402972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.403160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.403184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.403314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.403339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.403471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.403497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.403654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.403679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.403869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.403902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.404083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.404108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.404292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.404317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.404488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.404516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 
00:27:08.617 [2024-07-15 16:08:35.404656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.617 [2024-07-15 16:08:35.404683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.617 qpair failed and we were unable to recover it. 00:27:08.617 [2024-07-15 16:08:35.404865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.404896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.405070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.405098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.405298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.405326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.405478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.405503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.405656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.405698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.405882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.405910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.406065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.406093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.406226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.406268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.406436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.406464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 
00:27:08.618 [2024-07-15 16:08:35.406668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.406693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.406836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.406864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.407017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.407045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.407197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.407222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.407388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.407414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.407570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.407595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.407782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.407807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.408013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.408041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.408184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.408211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.408396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.408421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 
00:27:08.618 [2024-07-15 16:08:35.408580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.408605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.408785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.408813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.408991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.409017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.409170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.409212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.409415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.409442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.409596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.409621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.409786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.409827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.410032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.410060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.410214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.410239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.410419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.410447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 
00:27:08.618 [2024-07-15 16:08:35.410606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.410631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.410843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.410871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.411068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.411093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.411284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.411312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.411492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.411520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.411734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.411762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.411942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.411968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.412100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.412125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.412331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.412358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.412512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.412540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 
00:27:08.618 [2024-07-15 16:08:35.412693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.412718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.412841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.618 [2024-07-15 16:08:35.412887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.618 qpair failed and we were unable to recover it. 00:27:08.618 [2024-07-15 16:08:35.413040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.413068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.413276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.413301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.413478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.413505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.413691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.413716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.413901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.413927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.414079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.414106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.414294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.414323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.414507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.414531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 
00:27:08.619 [2024-07-15 16:08:35.414732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.414760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.414967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.414995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.415168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.415193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.415319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.415360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.415539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.415567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.415768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.415793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.415936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.415965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.416144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.416172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.416346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.416371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 00:27:08.619 [2024-07-15 16:08:35.416535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.619 [2024-07-15 16:08:35.416560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.619 qpair failed and we were unable to recover it. 
00:27:08.619 [2024-07-15 16:08:35.416724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.619 [2024-07-15 16:08:35.416749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.619 qpair failed and we were unable to recover it.
[... the three-line record above repeats 210 times in total, with only the bracketed microsecond timestamps advancing from 16:08:35.416724 through 16:08:35.457931; every attempt targets tqpair=0x10e6200 at 10.0.0.2, port 4420 and fails with errno = 111 ...]
00:27:08.625 [2024-07-15 16:08:35.457906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.625 [2024-07-15 16:08:35.457931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.625 qpair failed and we were unable to recover it.
00:27:08.625 [2024-07-15 16:08:35.458117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.458145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.458327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.458352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.458536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.458564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.458735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.458763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.458917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.458942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.459072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.459098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.459282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.459310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.459453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.459478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.459631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.459671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.459833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.459861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 
00:27:08.625 [2024-07-15 16:08:35.460039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.460064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.460267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.460295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.460497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.460525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.460741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.460766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.460959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.460987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.461177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.461201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.461322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.461347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.461482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.461507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.461690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.461718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.461865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.461895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 
00:27:08.625 [2024-07-15 16:08:35.462066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.462093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.462244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.462272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.462427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.462452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.462580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.462607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.462816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.462844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.463047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.463073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.463247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.463274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.463440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.463468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.463646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.463671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.463834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.463859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 
00:27:08.625 [2024-07-15 16:08:35.464026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.625 [2024-07-15 16:08:35.464052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.625 qpair failed and we were unable to recover it. 00:27:08.625 [2024-07-15 16:08:35.464233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.464258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.464404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.464432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.464614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.464642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.464824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.464848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.465012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.465217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.465396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.465548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.465773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 
00:27:08.626 [2024-07-15 16:08:35.465950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.465975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.466108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.466149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.466328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.466358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.466545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.466571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.466714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.466742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.466932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.466957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.467113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.467138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.467344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.467372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.467506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.467534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.467711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.467739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 
00:27:08.626 [2024-07-15 16:08:35.467890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.467918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.468158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.468330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.468481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.468641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.468824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.468977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.469127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.469307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.469506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 
00:27:08.626 [2024-07-15 16:08:35.469696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.469898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.469924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.470059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.470084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.470242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.470284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.470440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.470465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.470625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.470666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.470852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.626 [2024-07-15 16:08:35.470895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.626 qpair failed and we were unable to recover it. 00:27:08.626 [2024-07-15 16:08:35.471077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.471102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.471282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.471309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.471516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.471544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 
00:27:08.627 [2024-07-15 16:08:35.471695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.471720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.471886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.471929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.472071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.472099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.472275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.472300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.472432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.472473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.472661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.472689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.472900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.472931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.473094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.473119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.473332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.473360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.473536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.473561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 
00:27:08.627 [2024-07-15 16:08:35.473742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.473770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.473939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.473967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.474121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.474146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.474321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.474348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.474562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.474587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.474751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.474778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.474963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.474989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.475152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.475195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.475375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.475400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.475518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.475559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 
00:27:08.627 [2024-07-15 16:08:35.475703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.475731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.475910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.475935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.476114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.476142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.476356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.476381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.476541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.476566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.476773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.476801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.476948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.476976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.477185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.477210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.477349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.477377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.477544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.477572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 
00:27:08.627 [2024-07-15 16:08:35.477732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.477757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.477881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.477925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.478101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.478127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.478257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.478286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.478471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.478496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.478648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.478676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.478863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.478905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.479081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.479110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.479248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.479276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.627 qpair failed and we were unable to recover it. 00:27:08.627 [2024-07-15 16:08:35.479458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.627 [2024-07-15 16:08:35.479483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 
00:27:08.628 [2024-07-15 16:08:35.479617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.479642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.479824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.479850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.480042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.480067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.480245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.480273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.480450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.480477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.480682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.480707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.480870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.480901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.481044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.481234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.481434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 
00:27:08.628 [2024-07-15 16:08:35.481604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.481793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.481959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.481984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.482181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.482209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.482384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.482408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.482625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.482653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.482805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.482834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.483018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.483044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.483223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.483251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.483421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.483449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 
00:27:08.628 [2024-07-15 16:08:35.483622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.483647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.483824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.483852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.484918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.484947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.485097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.485125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.485279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.485305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 
00:27:08.628 [2024-07-15 16:08:35.485493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.485517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.485663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.485690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.485850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.485875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.486006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.486048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.486192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.486225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.486405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.486430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.486549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.486589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.486791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.486818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.487001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.487027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.487232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.487260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 
00:27:08.628 [2024-07-15 16:08:35.487431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.487459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.487650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.487675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.487882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.628 [2024-07-15 16:08:35.487910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.628 qpair failed and we were unable to recover it. 00:27:08.628 [2024-07-15 16:08:35.488075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.629 [2024-07-15 16:08:35.488103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.629 qpair failed and we were unable to recover it. 00:27:08.629 [2024-07-15 16:08:35.488288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.629 [2024-07-15 16:08:35.488313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.629 qpair failed and we were unable to recover it. 00:27:08.629 [2024-07-15 16:08:35.488458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.629 [2024-07-15 16:08:35.488486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.629 qpair failed and we were unable to recover it. 00:27:08.629 [2024-07-15 16:08:35.488640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.629 [2024-07-15 16:08:35.488668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.629 qpair failed and we were unable to recover it. 00:27:08.629 [2024-07-15 16:08:35.488817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.629 [2024-07-15 16:08:35.488842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.629 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.489048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.489074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.489250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.489279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 
00:27:08.910 [2024-07-15 16:08:35.489483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.489508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.489658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.489685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.489828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.489856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.490939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.490968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.491135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.491160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 
00:27:08.910 [2024-07-15 16:08:35.491291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.491315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.491447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.491482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.491648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.491672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.491824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.491851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.492036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.492061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.492202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.492226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.492359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.492403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.492576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.910 [2024-07-15 16:08:35.492604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.910 qpair failed and we were unable to recover it. 00:27:08.910 [2024-07-15 16:08:35.492812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.492837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.492971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.492997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 
00:27:08.911 [2024-07-15 16:08:35.493126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.493169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.493335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.493360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.493492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.493517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.493670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.493697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.493847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.493871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.494016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.494194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.494378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.494555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.494728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 
00:27:08.911 [2024-07-15 16:08:35.494911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.494937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.495093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.495118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.495266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.495294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.495482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.495507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.495667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.495692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.495811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.495836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.496002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.496167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.496357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.496547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 
00:27:08.911 [2024-07-15 16:08:35.496748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.496927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.496955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.497138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.497163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.497333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.497361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.497560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.497588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.497736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.497761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.497896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.497938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.498148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.498176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.498358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.498382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.498537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.498565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 
00:27:08.911 [2024-07-15 16:08:35.498701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.498728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.498918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.498944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.499124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.499152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.499293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.499320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.499492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.499517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.499686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.911 [2024-07-15 16:08:35.499714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.911 qpair failed and we were unable to recover it. 00:27:08.911 [2024-07-15 16:08:35.499900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.499926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.500067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.500092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.500275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.500305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.500490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.500515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 
00:27:08.912 [2024-07-15 16:08:35.500650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.500692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.500894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.500936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.501122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.501306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.501502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.501664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.501824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.501979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.502005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.502164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.502205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.502416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.502441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 
00:27:08.912 [2024-07-15 16:08:35.502616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.502643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.502784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.502812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.502992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.503151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.503375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.503580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.503736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.503954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.503982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.504192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.504217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.504351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.504376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 
00:27:08.912 [2024-07-15 16:08:35.504532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.504557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.504710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.504735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.504898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.504926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.505128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.505156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.505308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.505333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.505507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.505535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.505719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.505743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.505928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.505954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.506103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.506130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.506302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.506329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 
00:27:08.912 [2024-07-15 16:08:35.506505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.506530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.506708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.506736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.506914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.912 [2024-07-15 16:08:35.506943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.912 qpair failed and we were unable to recover it. 00:27:08.912 [2024-07-15 16:08:35.507106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.507131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.507288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.507328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.507496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.507523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.507708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.507733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.507890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.507915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.508076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.508104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.508283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.508308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 
00:27:08.913 [2024-07-15 16:08:35.508483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.508510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.508677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.508704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.508894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.508937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.509095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.509120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.509310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.509338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.509515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.509540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.509689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.509721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.509884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.509912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.510096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.510121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.510326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.510354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 
00:27:08.913 [2024-07-15 16:08:35.510518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.510546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.510695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.510720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.510925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.510953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.511155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.511337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.511505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.511690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.511858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.511997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.512198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 
00:27:08.913 [2024-07-15 16:08:35.512439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.512605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.512769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.512971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.512997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.513182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.513209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.513382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.513410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.513616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.513642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.513776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.513801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.513975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.514153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 
00:27:08.913 [2024-07-15 16:08:35.514332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.514514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.514700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.514899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.913 [2024-07-15 16:08:35.514932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.913 qpair failed and we were unable to recover it. 00:27:08.913 [2024-07-15 16:08:35.515071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.515099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.515313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.515337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.515542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.515570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.515740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.515768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.515947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.515972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.516104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.516129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 
00:27:08.914 [2024-07-15 16:08:35.516258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.516283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.516468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.516493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.516673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.516701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.516840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.516867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.517053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.517077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.517252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.517280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.517449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.517477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.517693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.517719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.517867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.517901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.518050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.518078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 
00:27:08.914 [2024-07-15 16:08:35.518287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.518312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.518513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.518540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.518678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.518706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.518889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.518915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.519122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.519150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.519286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.519314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.519494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.519519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.519727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.519755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.519908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.519938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 00:27:08.914 [2024-07-15 16:08:35.520096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.914 [2024-07-15 16:08:35.520121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.914 qpair failed and we were unable to recover it. 
00:27:08.914 [2024-07-15 16:08:35.520276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.520305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.520521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.520546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.520680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.520706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.520861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.520916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.521105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.521130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.521252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.521277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.521481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.521508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.521664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.521691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.521873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.914 [2024-07-15 16:08:35.521903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.914 qpair failed and we were unable to recover it.
00:27:08.914 [2024-07-15 16:08:35.522035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.522060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.522219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.522259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.522465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.522490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.522639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.522666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.522846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.522874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.523063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.523264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.523424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.523656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.523837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.523998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.524180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.524369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.524553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.524711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.524899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.524925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.525067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.525093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.525243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.525268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.525477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.525505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.525686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.525715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.525866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.525897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.526033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.526059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.526259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.526287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.526461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.526486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.526687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.526715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.526892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.526921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.527075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.527100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.527241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.527285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.527459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.527487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.527665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.527690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.527865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.527898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.528074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.915 [2024-07-15 16:08:35.528102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.915 qpair failed and we were unable to recover it.
00:27:08.915 [2024-07-15 16:08:35.528286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.528315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.528490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.528518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.528685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.528712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.528912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.528938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.529094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.529122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.529287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.529315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.529497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.529522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.529697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.529724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.529942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.529968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.530134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.530159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.530342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.530370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.530546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.530573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.530754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.530779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.530940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.530982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.531165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.531193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.531369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.531394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.531520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.531562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.531764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.531792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.531969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.531994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.532180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.532208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.532358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.532386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.532592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.532617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.532739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.532764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.532978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.533185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.533386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.533593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.533775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.533930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.533955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.534151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.534176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.534338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.534363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.534496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.534521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.534732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.534760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.534963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.534989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.535133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.535161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.535310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.535337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.916 qpair failed and we were unable to recover it.
00:27:08.916 [2024-07-15 16:08:35.535512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.916 [2024-07-15 16:08:35.535537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.535669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.535693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.535882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.535923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.536077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.536102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.536276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.536305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.536459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.536487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.536665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.536690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.536837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.536864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.537055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.537085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.537265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.537290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.537467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.537495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.537691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.537718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.537898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.537924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.538082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.538106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.538279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.538307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.538481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.538506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.538676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.538704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.538881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.538925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.539061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.539090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.539250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.539276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.539432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.539457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.539640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.539665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.539814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.539842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.540034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.540060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.540217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.540243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.540393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.540420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.540574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.540599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.540782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.540807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.541043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.541210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.541389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.541566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.541802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.541982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.542009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.542181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.542209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.542389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.542417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.542619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.542644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.542848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.917 [2024-07-15 16:08:35.542881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.917 qpair failed and we were unable to recover it.
00:27:08.917 [2024-07-15 16:08:35.543028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.543056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.543236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.543261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.543433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.543460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.543632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.543660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.543861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.543891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.544076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.544104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.544271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.544299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.544471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.544495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.544671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.544699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.544896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.544922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.545054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.545079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.545249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.545278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.545449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.545476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.545627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.545652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.545823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.545851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.546062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.546090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.546261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.546286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.546411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.546451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.546620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.546648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.546834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.546860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.547041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.547069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.547254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.547280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.547465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.547490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.547668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.547696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.547897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.547939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.548107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.548343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.548371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.548511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.548539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.548716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.548741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.548864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.548912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.549081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.549109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.549286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.549311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.549487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.549514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.549656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.549684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.549891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.549916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.550111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.550139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.550316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.550516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.550541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.918 [2024-07-15 16:08:35.550724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.918 [2024-07-15 16:08:35.550752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.918 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.550922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.550951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.551114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.551140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.551272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.551297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.551480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.551505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.551696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.551722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.551892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.551921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.552093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.552120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.552323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.552348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.552526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.552556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.552730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.552764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.552911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.552936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.553092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.553117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.553281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.553309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.553467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.553492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.553649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.553674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.553834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.919 [2024-07-15 16:08:35.553862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.919 qpair failed and we were unable to recover it.
00:27:08.919 [2024-07-15 16:08:35.554042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.554067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.554239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.554267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.554411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.554439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.554601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.554626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.554802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.554829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.555003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.555031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.555213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.555238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.555419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.555447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.555624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.555652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.555796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.555821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 
00:27:08.919 [2024-07-15 16:08:35.555981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.556025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.556195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.556223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.556396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.919 [2024-07-15 16:08:35.556421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.919 qpair failed and we were unable to recover it. 00:27:08.919 [2024-07-15 16:08:35.556624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.556652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.556819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.556847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.557017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.557042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.557232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.557261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.557430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.557458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.557635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.557660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.557837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.557865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 
00:27:08.920 [2024-07-15 16:08:35.558036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.558068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.558210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.558235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.558355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.558380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.558557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.558584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.558755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.558783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.558993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.559018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.559198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.559226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.559410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.559435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.559642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.559669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.559802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.559830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 
00:27:08.920 [2024-07-15 16:08:35.559991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.560020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.560200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.560229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.560380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.560411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.560590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.560614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.560796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.560824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.561014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.561194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.561376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.561541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.561719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 
00:27:08.920 [2024-07-15 16:08:35.561868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.561915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.562117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.562300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.562492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.562661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.562858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.562994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.563019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.563201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.563232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.563438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.563463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.563642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.563670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 
00:27:08.920 [2024-07-15 16:08:35.563810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.563838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.564037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.564063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.564204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.564232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.564373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.564400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.920 qpair failed and we were unable to recover it. 00:27:08.920 [2024-07-15 16:08:35.564555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.920 [2024-07-15 16:08:35.564580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.564699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.564724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.564944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.564973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.565189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.565214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.565358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.565386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.565587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.565615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 
00:27:08.921 [2024-07-15 16:08:35.565772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.565797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.565964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.565990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.566153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.566178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.566311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.566336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.566517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.566546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.566749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.566777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.566950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.566976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.567106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.567131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.567283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.567308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.567442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.567467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 
00:27:08.921 [2024-07-15 16:08:35.567600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.567626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.567846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.567874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.568068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.568093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.568276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.568304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.568480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.568508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.568693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.568718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.568921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.568950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.569130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.569158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.569331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.569356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.569525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.569552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 
00:27:08.921 [2024-07-15 16:08:35.569694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.569722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.569932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.569958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.570157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.570184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.570352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.570380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.570527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.570552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.570707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.570749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.570889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.570931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.571062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.571087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.571281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.571309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.571463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.571491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 
00:27:08.921 [2024-07-15 16:08:35.571678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.571703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.571887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.571916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.921 qpair failed and we were unable to recover it. 00:27:08.921 [2024-07-15 16:08:35.572096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.921 [2024-07-15 16:08:35.572121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.572283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.572308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.572462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.572490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.572666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.572695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.572870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.572905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.573110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.573138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.573309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.573336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.573539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.573564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 
00:27:08.922 [2024-07-15 16:08:35.573764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.573792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.573939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.573968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.574156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.574181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.574390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.574418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.574605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.574631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.574799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.574824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.575011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.575037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.575212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.575239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.575422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.575447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.575629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.575656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 
00:27:08.922 [2024-07-15 16:08:35.575803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.575831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.576028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.576054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.576258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.576286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.576455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.576484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.576664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.576689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.576809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.576855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.577039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.577067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.577251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.577276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.577428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.577453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.577616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.577643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 
00:27:08.922 [2024-07-15 16:08:35.577831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.577856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.578021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.578046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.578220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.578248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.578452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.578477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.578609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.578651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.578829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.578857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.579018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.579044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.579204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.579246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.579449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.579477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.579667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.579692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 
00:27:08.922 [2024-07-15 16:08:35.579864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.579905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.580057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.580085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.580260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.580285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.922 [2024-07-15 16:08:35.580454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.922 [2024-07-15 16:08:35.580481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.922 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.580640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.580667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.580798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.580823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.580982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.581169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.581351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.581549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 
00:27:08.923 [2024-07-15 16:08:35.581719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.581952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.581977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.582150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.582182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.582332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.582360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.582520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.582544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.582669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.582694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.582845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.582875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.583069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.583094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.583256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.583281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 00:27:08.923 [2024-07-15 16:08:35.583489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.923 [2024-07-15 16:08:35.583516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.923 qpair failed and we were unable to recover it. 
00:27:08.923 [2024-07-15 16:08:35.583666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.923 [2024-07-15 16:08:35.583691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.923 qpair failed and we were unable to recover it.
[... the three-line error above repeats verbatim, approximately 210 times, with only the timestamps advancing from 16:08:35.583666 through 16:08:35.624606; every occurrence reports errno = 111 for the same tqpair=0x10e6200, addr=10.0.0.2, port=4420 ...]
00:27:08.928 [2024-07-15 16:08:35.624786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.624811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.624935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.624979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.625158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.625186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.625369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.625394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.625526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.625570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.625721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.625749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.625930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.625956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.626116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.626141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.626299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 00:27:08.928 [2024-07-15 16:08:35.626495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.928 [2024-07-15 16:08:35.626520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.928 qpair failed and we were unable to recover it. 
00:27:08.929 [2024-07-15 16:08:35.626691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.626718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.626862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.626897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.627086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.627111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.627286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.627314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.627481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.627509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.627666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.627692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.627813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.627854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.628039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.628067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.628224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.628251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.628427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.628455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 
00:27:08.929 [2024-07-15 16:08:35.628626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.628654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.628846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.628871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.629060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.629088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.629237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.629265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.629446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.629471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.629678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.629706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.629891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.629920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.630079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.630238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.630262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.630439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.630467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 
00:27:08.929 [2024-07-15 16:08:35.630670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.630695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.630830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.630855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.631955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.631981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.632115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.632140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.632336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.632361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 
00:27:08.929 [2024-07-15 16:08:35.632547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.632575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.632711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.632739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.632916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.632941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.633072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.633114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.633318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.633346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.633496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.633521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.633728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.633756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.633924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.633953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.634132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.634157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.634346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.634371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 
00:27:08.929 [2024-07-15 16:08:35.634551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.929 [2024-07-15 16:08:35.634576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.929 qpair failed and we were unable to recover it. 00:27:08.929 [2024-07-15 16:08:35.634745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.634770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.634906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.634932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.635111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.635139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.635296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.635322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.635479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.635521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.635727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.635755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.635909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.635935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.636103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.636131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.636311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.636338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 
00:27:08.930 [2024-07-15 16:08:35.636514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.636538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.636712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.636740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.636951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.636979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.637143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.637167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.637342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.637370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.637546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.637575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.637778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.637803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.637985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.638017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.638174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.638202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.638410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.638435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 
00:27:08.930 [2024-07-15 16:08:35.638574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.638602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.638774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.638802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.638999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.639025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.639233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.639261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.639466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.639491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.639686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.639711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.639890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.639919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.640114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.640142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.640350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.640375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.640581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.640609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 
00:27:08.930 [2024-07-15 16:08:35.640813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.640838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.640972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.640998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.641170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.641198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.641411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.641437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.641566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.641592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.641774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.641802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.641951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.641977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.642115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.642140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.642309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.642337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.642513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.642541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 
00:27:08.930 [2024-07-15 16:08:35.642731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.642756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.642918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.642944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.930 qpair failed and we were unable to recover it. 00:27:08.930 [2024-07-15 16:08:35.643133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.930 [2024-07-15 16:08:35.643158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.643316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.643341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.643486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.643518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.643697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.643725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.643895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.643921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.644071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.644099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.644274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.644302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.644483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.644509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 
00:27:08.931 [2024-07-15 16:08:35.644682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.644709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.644860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.644896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.645958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.645987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.646140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.646165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.646343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.646370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 
00:27:08.931 [2024-07-15 16:08:35.646537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.646565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.646770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.646796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.646943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.646971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.647147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.647175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.647354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.647379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.647558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.647586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.647761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.647789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.647962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.647988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.648160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.648188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.648364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.648392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 
00:27:08.931 [2024-07-15 16:08:35.648598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.648623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.648800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.648828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.648983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.649193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.649421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.649627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.649807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.649967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.649993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.650156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.650181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 00:27:08.931 [2024-07-15 16:08:35.650310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.931 [2024-07-15 16:08:35.650336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.931 qpair failed and we were unable to recover it. 
00:27:08.931 [2024-07-15 16:08:35.650540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.931 [2024-07-15 16:08:35.650568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.931 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 16:08:35.650540 through 16:08:35.692391 ...]
00:27:08.937 [2024-07-15 16:08:35.692334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.937 [2024-07-15 16:08:35.692391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:08.937 qpair failed and we were unable to recover it.
00:27:08.937 [2024-07-15 16:08:35.692565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.692593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.692767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.692794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.692955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.692981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.693159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.693188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.693388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.693416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.693563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.693591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.693744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.693769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.693926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.693967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.694144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.694172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.694324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.694352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 
00:27:08.937 [2024-07-15 16:08:35.694534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.694559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.694734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.694762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.694934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.694962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.695131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.695159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.695313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.695338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.695543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.695571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.695720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.695748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.695934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.695963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.696141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.696166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.696328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.696390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 
00:27:08.937 [2024-07-15 16:08:35.696562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.696590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.696798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.696826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.696984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.697010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.697143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.697167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.697323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.697348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.697499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.697527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.697706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.697731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.697948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.698005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.698171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.698199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.937 [2024-07-15 16:08:35.698351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.698379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 
00:27:08.937 [2024-07-15 16:08:35.698552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.937 [2024-07-15 16:08:35.698577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.937 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.698746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.698774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.698951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.698979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.699151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.699179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.699336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.699361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.699515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.699540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.699720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.699747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.699926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.699955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.700143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.700168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.700327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.700357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 
00:27:08.938 [2024-07-15 16:08:35.700511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.700539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.700714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.700741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.700905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.700931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.701091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.701116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.701275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.701303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.701480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.701508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.701714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.701739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.701952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.701978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.702113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.702139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.702267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.702292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 
00:27:08.938 [2024-07-15 16:08:35.702448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.702473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.702613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.702641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.702818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.702846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.703035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.703063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.703249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.703274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.703451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.703478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.703624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.703652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.703820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.703848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.704016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.704042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.704246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.704274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 
00:27:08.938 [2024-07-15 16:08:35.704448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.704473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.704626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.704666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.704818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.704843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.705055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.705084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.705238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.705266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.705442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.705470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.705653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.705682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.705892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.705921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.706060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.706088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.706301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.706326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 
00:27:08.938 [2024-07-15 16:08:35.706451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.706476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.706677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.938 [2024-07-15 16:08:35.706705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.938 qpair failed and we were unable to recover it. 00:27:08.938 [2024-07-15 16:08:35.706888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.706916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.707095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.707123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.707299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.707323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.707561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.707591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.707775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.707803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.707979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.708167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.708375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 
00:27:08.939 [2024-07-15 16:08:35.708593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.708789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.708972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.708998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.709153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.709178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.709387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.709415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.709580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.709608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.709786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.709811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.709981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.710190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.710378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 
00:27:08.939 [2024-07-15 16:08:35.710569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.710717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.710892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.710921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.711071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.711103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.711280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.711306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.711510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.711538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.711706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.711733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.711890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.711919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.712096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.712121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.712268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.712297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 
00:27:08.939 [2024-07-15 16:08:35.712498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.712526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.712714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.712742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.712890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.712916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.713117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.713145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.713279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.713307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.713477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.713505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.713684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.713725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.713763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f40e0 (9): Bad file descriptor 00:27:08.939 [2024-07-15 16:08:35.714056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.714314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.714344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 
00:27:08.939 [2024-07-15 16:08:35.714560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.714585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.714871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.714935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.715118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.715144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.715285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.939 [2024-07-15 16:08:35.715311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.939 qpair failed and we were unable to recover it. 00:27:08.939 [2024-07-15 16:08:35.715495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.715519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.715726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.715753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.715910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.715936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.716112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.716140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.716347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.716372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.716533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.716558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 
00:27:08.940 [2024-07-15 16:08:35.716729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.716757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.716920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.716949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.717127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.717153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.717331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.717359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.717543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.717568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.717753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.717777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.717971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.718000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.718182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.718212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.718375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.718400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 00:27:08.940 [2024-07-15 16:08:35.718534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-15 16:08:35.718577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.940 qpair failed and we were unable to recover it. 
00:27:08.940 [2024-07-15 16:08:35.718747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-15 16:08:35.718775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.940 qpair failed and we were unable to recover it.
00:27:08.941 -- previous three-line record repeated 49 more times (16:08:35.718950 .. 16:08:35.728623) --
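errno = 111 is ECONNREFUSED on Linux: each TCP connect() to 10.0.0.2:4420 gets an RST because nothing is listening there once the nvmf target process dies (see the "Killed" line below). A minimal probe loop in plain bash showing the same condition; this is illustrative only, not part of target_disconnect.sh, and assumes bash's /dev/tcp redirection is available on the test host:

  # Probe 10.0.0.2:4420 until something accepts again. While the target is
  # down, each attempt fails exactly like the records above:
  # connect() -> errno 111 (ECONNREFUSED).
  until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
      echo "connect() refused (errno 111, ECONNREFUSED); retrying"
      sleep 0.1
  done
  echo "listener is back on 10.0.0.2:4420"

The subshell around exec means the probe socket is closed automatically on each iteration, so the loop leaks no file descriptors.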
00:27:08.941 [2024-07-15 16:08:35.728756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.941 [2024-07-15 16:08:35.728798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.941 qpair failed and we were unable to recover it.
00:27:08.941 -- this record keeps repeating up to 16:08:35.737436 and was interleaved with the harness output below; the repeats are omitted so the trace reads in order --
00:27:08.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1262655 Killed "${NVMF_APP[@]}" "$@"
00:27:08.941 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:08.941 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:08.941 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1263144
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1263144
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1263144 ']'
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:08.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:08.942 16:08:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
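The trace above is the harness recovering from the induced disconnect: bash reports that the previous target (pid 1262655, launched as "${NVMF_APP[@]}" "$@") was killed with SIGKILL, then disconnect_init calls nvmfappstart, which relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 1263144, and waitforlisten blocks until the new process opens its RPC socket. A hedged paraphrase of that sequence in bash; the real helpers live in nvmf/common.sh and autotest_common.sh, and these bodies are a sketch, not the actual implementations:

  disconnect_init() {   # host/target_disconnect.sh@48: bring the target back up
      nvmfappstart -m 0xF0
  }

  nvmfappstart() {      # nvmf/common.sh: launch nvmf_tgt in the test netns
      ip netns exec cvl_0_0_ns_spdk \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
      nvmfpid=$!
      waitforlisten "$nvmfpid"
  }

  waitforlisten() {     # autotest_common.sh: poll for the app's RPC socket
      local pid=$1
      local rpc_addr=/var/tmp/spdk.sock
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      # the real helper also checks that $pid is still alive; omitted here
      while (( max_retries-- > 0 )) && ! [ -S "$rpc_addr" ]; do
          sleep 0.1
      done
  }

Until waitforlisten returns and the listener on 10.0.0.2:4420 is re-added, the initiator's connect() retries below keep failing with errno 111.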
00:27:08.942 [2024-07-15 16:08:35.737575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.942 [2024-07-15 16:08:35.737604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.942 qpair failed and we were unable to recover it.
00:27:08.945 -- previous three-line record repeated 89 more times (16:08:35.737758 .. 16:08:35.755626) --
00:27:08.945 [2024-07-15 16:08:35.755751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.755776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.755946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.755986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.756121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.756148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.756336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.756362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.756549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.756593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.756772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.756817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.756986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.757013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.757176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.757203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.757380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.757408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.757585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.757613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 
00:27:08.945 [2024-07-15 16:08:35.757793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.757820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.758007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.758032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.758195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.758221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.758553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.758607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.758759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.758791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.758977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.759184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.759348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.759530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.759756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 
00:27:08.945 [2024-07-15 16:08:35.759971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.759998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.760172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.760200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.760374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.760401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.760578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.760606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.760754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.760782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.760958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.760983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.761112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.761138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.761333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.761361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.761551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.945 [2024-07-15 16:08:35.761580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.945 qpair failed and we were unable to recover it. 00:27:08.945 [2024-07-15 16:08:35.761756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.761786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 
00:27:08.946 [2024-07-15 16:08:35.761966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.761992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.762156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.762181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.762394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.762422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.762600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.762629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.762838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.762866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.763042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.763066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.763242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.763270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.763540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.763598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.763805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.763832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.764022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.764048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 
00:27:08.946 [2024-07-15 16:08:35.764204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.764229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.764389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.764417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.764583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.764611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.764810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.764838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.765019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.765044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.765205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.765246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.765454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.765479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.765660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.765690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.765870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.765903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.766085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.766109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 
00:27:08.946 [2024-07-15 16:08:35.766318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.766346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.766514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.766542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.766741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.766769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.766957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.766983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.767135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.767183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.767368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.767393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.767598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.767626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.767769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.767797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.768001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.768027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.768197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.768225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 
00:27:08.946 [2024-07-15 16:08:35.768426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.768453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.768763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.768810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.769003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.769029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.769210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.769238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.769441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.769466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.769650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.769678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.769896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.769922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.770071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.770097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.770256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.770283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 00:27:08.946 [2024-07-15 16:08:35.770427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.946 [2024-07-15 16:08:35.770456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.946 qpair failed and we were unable to recover it. 
00:27:08.946 [2024-07-15 16:08:35.770637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.770664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.770835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.770863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.771071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.771096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.771235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.771262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.771411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.771438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.771578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.771606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.771804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.771832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.772057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.772082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.772244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.772272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.772447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.772476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 
00:27:08.947 [2024-07-15 16:08:35.772632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.772674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.772852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.772886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.773068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.773093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.773270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.773299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.773465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.773493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.773668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.773693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.773900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.773929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.774129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.774156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.774339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.774364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.774524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.774549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 
00:27:08.947 [2024-07-15 16:08:35.774726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.774755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.774914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.774940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.775099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.775124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.775282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.775310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.775465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.775493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.775670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.775698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.775885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.775934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.776065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.776091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.776262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.776290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.776479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.776506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 
00:27:08.947 [2024-07-15 16:08:35.776658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.776683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.776851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.776895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.777901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.777927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.778083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.778108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.778269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.778293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 
00:27:08.947 [2024-07-15 16:08:35.778475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.947 [2024-07-15 16:08:35.778503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.947 qpair failed and we were unable to recover it. 00:27:08.947 [2024-07-15 16:08:35.778645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.778673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.778861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.778897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.779044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.779073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.779268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.779296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.779477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.779502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.779654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.779682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.779854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.779887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.780072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.780098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.780270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.780297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 
00:27:08.948 [2024-07-15 16:08:35.780460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.780488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.780644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.780669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.780838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.780866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.781098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.781123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.781285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.781310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.781516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.781544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.781740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.781767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.781968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.781994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.782204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.782232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.782374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.782402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 
00:27:08.948 [2024-07-15 16:08:35.782586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.782611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.782760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.782787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.782977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.783160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.783363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.783575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.783769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.783923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.783969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.784151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.784181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.784329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.784354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 
00:27:08.948 [2024-07-15 16:08:35.784358] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:08.948 [2024-07-15 16:08:35.784434] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.948 [2024-07-15 16:08:35.784504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.784546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.784724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.784750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.784936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.784962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.785188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.785214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.785397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.785422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.785614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.785639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.785803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.785827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.786018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.786047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 00:27:08.948 [2024-07-15 16:08:35.786209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.948 [2024-07-15 16:08:35.786234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:08.948 qpair failed and we were unable to recover it. 
00:27:08.948 [2024-07-15 16:08:35.786392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.948 [2024-07-15 16:08:35.786418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:08.948 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error repeat with fresh timestamps for tqpair=0x7f1708000b90, addr=10.0.0.2, port=4420, from 16:08:35.786 through 16:08:35.818 (console time 00:27:08.948-00:27:08.952) ...]
00:27:09.241 [2024-07-15 16:08:35.818886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.241 [2024-07-15 16:08:35.818912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.241 qpair failed and we were unable to recover it.
[... identical retries continue on tqpair=0x7f1708000b90 through 16:08:35.819 ...]
00:27:09.241 [2024-07-15 16:08:35.819957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.241 [2024-07-15 16:08:35.820002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.241 qpair failed and we were unable to recover it.
[... the same failure repeats on the new tqpair=0x7f16f8000b90 ...]
00:27:09.241 EAL: No free 2048 kB hugepages reported on node 1
[... connect() retries on tqpair=0x7f16f8000b90 continue to fail with errno = 111 at console time 00:27:09.241 ...]
[... the repeating failure continues on tqpair=0x7f16f8000b90 from 16:08:35.820 through 16:08:35.829 (console time 00:27:09.241-00:27:09.243), then resumes on tqpair=0x7f1708000b90 ...]
00:27:09.243 [2024-07-15 16:08:35.830471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.243 [2024-07-15 16:08:35.830499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.243 qpair failed and we were unable to recover it.
00:27:09.243 [2024-07-15 16:08:35.830635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.830660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.830837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.830884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.831894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.831933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.832101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.832128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.832276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.832301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 
00:27:09.243 [2024-07-15 16:08:35.832457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.832482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.832615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.832642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.832811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.832837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.832998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.833172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.833392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.833581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.833764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.833935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.833960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.834098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.834123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 
00:27:09.243 [2024-07-15 16:08:35.834262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.834288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.834487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.834513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.834649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.834675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.834841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.834867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.835083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.835243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.835424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.835651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.835844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.835995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.836022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 
00:27:09.243 [2024-07-15 16:08:35.836189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.836215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.836340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.836366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.836548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.836574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.836761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.836787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.836988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.837013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.837144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.837169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.243 [2024-07-15 16:08:35.837308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.243 [2024-07-15 16:08:35.837336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.243 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.837517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.837552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.837733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.837758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.837938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.837964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 
00:27:09.244 [2024-07-15 16:08:35.838095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.838251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.838460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.838641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.838807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.838969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.838995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.839125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.839150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.839348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.839374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.839533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.839569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.839724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.839750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 
00:27:09.244 [2024-07-15 16:08:35.839940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.839966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.840127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.840316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.840473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.840660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.840850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.840984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.841171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.841323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.841509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 
00:27:09.244 [2024-07-15 16:08:35.841679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.841864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.841899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.842924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.842950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.843143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.843169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.843313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.843339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 
00:27:09.244 [2024-07-15 16:08:35.843502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.843526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.843681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.843706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.843872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.843902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.844039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.844064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.844249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.844274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.844439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.844463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.844647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.844672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.244 qpair failed and we were unable to recover it. 00:27:09.244 [2024-07-15 16:08:35.844833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.244 [2024-07-15 16:08:35.844858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.845012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.845162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 
00:27:09.245 [2024-07-15 16:08:35.845325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.845505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.845746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.845942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.845969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.846105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.846132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.846297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.846322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.846465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.846492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.846660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.846686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.846845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.846870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.847002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 
00:27:09.245 [2024-07-15 16:08:35.847205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.847395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.847582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.847773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.847947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.847979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.848140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.848166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.848335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.848362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.848530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.848557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.848703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.848729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.848864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.848898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 
00:27:09.245 [2024-07-15 16:08:35.849091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.849116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.849296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.849322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.849506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.849531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.849694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.849720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.849885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.849912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.850054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.850080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.850234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.850260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.850419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.850444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.850615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.850641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 00:27:09.245 [2024-07-15 16:08:35.850799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.245 [2024-07-15 16:08:35.850825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.245 qpair failed and we were unable to recover it. 
00:27:09.245 [2024-07-15 16:08:35.851001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.851028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.851188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.851214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.851403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.851429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.851616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.851642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.851806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.851832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.851974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.852163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.852370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.852554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.852705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 
00:27:09.246 [2024-07-15 16:08:35.852926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.852954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.853109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.853148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.853321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.853358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.853494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.853520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.853649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.853674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.853813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.853845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.854020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.854047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.854209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.854236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.854393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.854418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 00:27:09.246 [2024-07-15 16:08:35.854569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.246 [2024-07-15 16:08:35.854595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.246 qpair failed and we were unable to recover it. 
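errno = 111 in these entries is ECONNREFUSED on Linux: each TCP connection attempt to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is refused because nothing is accepting on the target side at that moment, so the initiator logs the posix_sock_create failure, the qpair-level error, and the recovery failure as one triplet. A minimal standalone reproduction of the failing call (address and port taken from the log; nothing below is SPDK code):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Illustrative only: with no listener on 10.0.0.2:4420, connect()
     * fails with ECONNREFUSED, which is errno 111 on Linux. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* This is the condition posix_sock_create reports above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }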
00:27:09.246 [2024-07-15 16:08:35.856056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect()/qpair-failure triplet keeps repeating around the NOTICE, for tqpair=0x10e6200, 0x7f1700000b90, and again 0x10e6200, through 16:08:35.861 ...]
00:27:09.247 [2024-07-15 16:08:35.861993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.247 [2024-07-15 16:08:35.862019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.247 qpair failed and we were unable to recover it.
00:27:09.247 [2024-07-15 16:08:35.862148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.862174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.862320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.862345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.862472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.862497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.862673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.862698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.862836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.862869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.863036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.863198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.863387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.863600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.863763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 
00:27:09.247 [2024-07-15 16:08:35.863948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.863976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.864106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.864132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.864362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.864387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.864572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.864597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.864756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.864781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.864912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.864938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.865073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.865099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.865269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.865294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.865450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.865479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 00:27:09.247 [2024-07-15 16:08:35.865645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.247 [2024-07-15 16:08:35.865670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.247 qpair failed and we were unable to recover it. 
00:27:09.248 [2024-07-15 16:08:35.865853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.865884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.866056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.866081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.866265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.866290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.866477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.866513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.866674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.866700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.866898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.866924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.867065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.867091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.867286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.867312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.867447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.867472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.867673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.867699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 
00:27:09.248 [2024-07-15 16:08:35.867831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.867857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.868049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.868075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.868253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.868279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.868418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.868443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.868621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.868647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.868780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.868806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.869002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.869028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.869190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.869215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.869350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.869375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.869634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.869659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 
00:27:09.248 [2024-07-15 16:08:35.869816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.869842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.870911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.870937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.871067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.871093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.871286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.871311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.871472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.871498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 
00:27:09.248 [2024-07-15 16:08:35.871689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.871714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.871883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.871910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.872906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.872932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.873065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.873089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.873251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.873277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 
00:27:09.248 [2024-07-15 16:08:35.873404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.248 [2024-07-15 16:08:35.873432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.248 qpair failed and we were unable to recover it. 00:27:09.248 [2024-07-15 16:08:35.873595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.873620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.873783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.873809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.873975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.874155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.874365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.874557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.874716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.874898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.874924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.875089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.875115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 
00:27:09.249 [2024-07-15 16:08:35.875252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.875277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.875413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.875438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.875623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.875651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.875789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.875815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.875989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.876152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.876304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.876467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.876626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.876806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.876832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 
00:27:09.249 [2024-07-15 16:08:35.877013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.877183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.877379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.877567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.877726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.877910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.877936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.878125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.878178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.878328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.878356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.878498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.878526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.878723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.878751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 
00:27:09.249 [2024-07-15 16:08:35.878913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.878940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.879106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.879460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.879645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.879835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.879975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.880140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.880316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.880481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 
00:27:09.249 [2024-07-15 16:08:35.880692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.880880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.249 [2024-07-15 16:08:35.880906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.249 qpair failed and we were unable to recover it. 00:27:09.249 [2024-07-15 16:08:35.881066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.881092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.881230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.881258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.881441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.881467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.881629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.881656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.881816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.881843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.882023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.882207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.882367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 
00:27:09.250 [2024-07-15 16:08:35.882550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.882730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.882894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.882921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.883096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.883137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.883345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.883373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.883561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.883587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.883793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.883819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.883968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.883995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.884173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.884200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.884362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.884395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 
00:27:09.250 [2024-07-15 16:08:35.884560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.884586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.884777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.884803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.884974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.885001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.885181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.885207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.885390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.885416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.885578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.885604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.885767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.885799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.885971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.886000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.886160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.886192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.886319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.886345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 
00:27:09.250 [2024-07-15 16:08:35.886522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.886556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.886725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.886752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.887676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.887706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.887908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.887936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.888097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.250 [2024-07-15 16:08:35.888124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.250 qpair failed and we were unable to recover it. 00:27:09.250 [2024-07-15 16:08:35.888300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.888326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.888494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.888519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.888666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.888691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.888828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.888854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.889007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.889034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 
00:27:09.251 [2024-07-15 16:08:35.889231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.889257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.889425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.889451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.889639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.889665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.889809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.889850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.890019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.890046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.890236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.890262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.890399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.890425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.890586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.890618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.890789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.890815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.891004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 
00:27:09.251 [2024-07-15 16:08:35.891178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.891367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.891534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.891691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.891888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.891915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.892079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.892104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.892230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.892256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.892408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.892434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.892603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.892631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.892815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.892841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 
00:27:09.251 [2024-07-15 16:08:35.892980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.893197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.893348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.893557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.893713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.893870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.893901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.894038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.894063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.894233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.894259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.894465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.894490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.894620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.894646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 
00:27:09.251 [2024-07-15 16:08:35.894812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.894838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.894974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.895001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.895135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.895160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.895326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.895352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.895533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.895559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.895713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.251 [2024-07-15 16:08:35.895738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.251 qpair failed and we were unable to recover it. 00:27:09.251 [2024-07-15 16:08:35.895864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.895905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.896072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.896098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.896237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.896272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.896457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.896483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 
00:27:09.252 [2024-07-15 16:08:35.896610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.896639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.896806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.896832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.896990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.897149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.897362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.897517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.897671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.897825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.897851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.898020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.898061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.898212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.898241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 
00:27:09.252 [2024-07-15 16:08:35.898427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.898453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.898612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.898638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.898797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.898823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.898986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.899188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.899405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.899617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.899770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.899953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.899980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.900113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.900138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 
00:27:09.252 [2024-07-15 16:08:35.900307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.900334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.900493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.900519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.900686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.900711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.900899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.900927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.901970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.901996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 
00:27:09.252 [2024-07-15 16:08:35.902157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.902190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.902357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.902383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.902544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.902573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.902741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.902767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.902907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.902935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.903103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.903130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.903323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.252 [2024-07-15 16:08:35.903349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.252 qpair failed and we were unable to recover it. 00:27:09.252 [2024-07-15 16:08:35.903505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.903531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.903669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.903695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.903849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.903894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 
00:27:09.253 [2024-07-15 16:08:35.904062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.904088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.904286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.904312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.904471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.904497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.904681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.904714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.904866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.904900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.905024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.905186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.905401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.905591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.905783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 
00:27:09.253 [2024-07-15 16:08:35.905946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.905974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.906109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.906135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.906325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.906351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.906497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.906525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.906694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.906719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.906856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.906900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.907066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.907224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.907382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.907591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 
00:27:09.253 [2024-07-15 16:08:35.907757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.907947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.907973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.908134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.908160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.908318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.908345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.908508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.908533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.908697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.908726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.908895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.908922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.909077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.909108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.909275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.909301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.909457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.909484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 
00:27:09.253 [2024-07-15 16:08:35.909614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.909641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.909829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.909856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.910030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.910056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.910216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.910254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.910429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.910455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.910652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.910677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.910812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.910838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.911005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.253 [2024-07-15 16:08:35.911033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.253 qpair failed and we were unable to recover it. 00:27:09.253 [2024-07-15 16:08:35.911202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.911228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.911364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.911390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 
00:27:09.254 [2024-07-15 16:08:35.911571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.911597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.911761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.911787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.911949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.911977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.912137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.912163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.912329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.912355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.912518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.912554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.912716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.912743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.912931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.912957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.913093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.913250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 
00:27:09.254 [2024-07-15 16:08:35.913403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.913589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.913798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.913972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.913998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.914162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.914199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.914401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.914429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.914588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.914613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.914771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.914796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.914953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.914978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.915115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.915141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 
00:27:09.254 [2024-07-15 16:08:35.915302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.915328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.915487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.915512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.915695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.915720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.915849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.915890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.916075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.916234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.916436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.916615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.916841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.916986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.917012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 
00:27:09.254 [2024-07-15 16:08:35.917166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.917192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.917322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.917359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.254 qpair failed and we were unable to recover it. 00:27:09.254 [2024-07-15 16:08:35.917535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.254 [2024-07-15 16:08:35.917566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.917828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.917857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.918026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.918054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.918214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.918240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.918436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.918462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.918648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.918675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.918897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.918924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.919062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.919088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 
00:27:09.255 [2024-07-15 16:08:35.919264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.919290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.919461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.919488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.919647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.919673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.919853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.919888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.920032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.920058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.920221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.920247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.920409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.920435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.920649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.920675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.920808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.920835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.921014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 
00:27:09.255 [2024-07-15 16:08:35.921172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.921371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.921538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.921737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.921908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.921936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.922079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.922105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.922246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.922272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.922432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.922458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.922693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.922719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 00:27:09.255 [2024-07-15 16:08:35.922855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.255 [2024-07-15 16:08:35.922894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.255 qpair failed and we were unable to recover it. 
00:27:09.255 [2024-07-15 16:08:35.923024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.923206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.923392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.923547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.923757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.923944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.923969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.924133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.924158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.924321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.924347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.924505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.924531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.924666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.924692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.924854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.924894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.925056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.925083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.925246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.925272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.255 qpair failed and we were unable to recover it.
00:27:09.255 [2024-07-15 16:08:35.925511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.255 [2024-07-15 16:08:35.925537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.925729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.925755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.925918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.925945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.926950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.926976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.927115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.927141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.927342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.927368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.927498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.927524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.927660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.927686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.927818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.927843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.928923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.928950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.929132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.929158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.929319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.929345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.929503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.929533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.929692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.929717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.929840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.929866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.930943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.930970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.931130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.931291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.931477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.931690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.931851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.931999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.932212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.932371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.932560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.932740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.932955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.256 [2024-07-15 16:08:35.932981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.256 qpair failed and we were unable to recover it.
00:27:09.256 [2024-07-15 16:08:35.933138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.933164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.933296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.933320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.933486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.933512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.933694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.933720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.933855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.933897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.934830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.934857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.935896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.935922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.936078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.936249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.936439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.936625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.936819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.936989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.937173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.937353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.937531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.937737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.937947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.937973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.938137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.938163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.938298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.938323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.938515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.938541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.938699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.938724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.938913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.938939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.939121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.939311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.939461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.939631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.939814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.939988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.940013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.940138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.940163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.940346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.940371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-07-15 16:08:35.940522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.257 [2024-07-15 16:08:35.940547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.940671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.940696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.940860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.940899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.941056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.941081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.941238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.941264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.941428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.941454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.941616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.941641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.941818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.941843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.942967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.942993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.943151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.943176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.943332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.943357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.943480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.943506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.943693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.943718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.943852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.943893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.944046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.944071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.944231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.944256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.944433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.944458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.944648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.944672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.944844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.944869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.945954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.945980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.946125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.946150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.946294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.946321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.946504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.946529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.946680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.946706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.946868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.946900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-07-15 16:08:35.947074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.258 [2024-07-15 16:08:35.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.947260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.947286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.947468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.947493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.947627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.947653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.947806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.947831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.948945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.948972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.949105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.949130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.949285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.949310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.949465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.949491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.949643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.949669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.949826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.949852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.950884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.950910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.951898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.951925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.952938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.952963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.953130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.953308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.953488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.953639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.259 [2024-07-15 16:08:35.953825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-07-15 16:08:35.953956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.259 [2024-07-15 16:08:35.953982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.259 qpair failed and we were unable to recover it. 00:27:09.259 [2024-07-15 16:08:35.954122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.259 [2024-07-15 16:08:35.954148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.259 qpair failed and we were unable to recover it. 00:27:09.259 [2024-07-15 16:08:35.954278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.259 [2024-07-15 16:08:35.954303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.259 qpair failed and we were unable to recover it. 00:27:09.259 [2024-07-15 16:08:35.954487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.954512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.954671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.954697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.954886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.954911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.955066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.955092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.955256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.955282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.955453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.955479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.955632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.955658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 
00:27:09.260 [2024-07-15 16:08:35.955840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.955865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.956943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.956969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.957111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.957136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.957332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.957357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.957518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.957543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 
00:27:09.260 [2024-07-15 16:08:35.957672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.957698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.957858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.957898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.958856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.958899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.959032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.959187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 
00:27:09.260 [2024-07-15 16:08:35.959401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.959555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.959735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.959915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.959941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.960128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.960287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.960469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.960676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.960828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.960998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 
00:27:09.260 [2024-07-15 16:08:35.961180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.961335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.961516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.961699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.260 [2024-07-15 16:08:35.961853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.260 [2024-07-15 16:08:35.961890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.260 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.962056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.962082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.962268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.962293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.962447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.962472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.962653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.962678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.962809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.962834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 
00:27:09.261 [2024-07-15 16:08:35.962997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.963151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.963332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.963494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.963652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.963840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.963866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.964015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.964204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.964387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.964565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 
00:27:09.261 [2024-07-15 16:08:35.964724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.964898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.964924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.965085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.965110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.965294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.965320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.965503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.965530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.965684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.965710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.965839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.965865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.966033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.966181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.966349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 
00:27:09.261 [2024-07-15 16:08:35.966533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.966683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.966864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.966895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.967896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.967922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.968057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 
00:27:09.261 [2024-07-15 16:08:35.968216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.968376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.968555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.968749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.968931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.968956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.261 qpair failed and we were unable to recover it. 00:27:09.261 [2024-07-15 16:08:35.969112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.261 [2024-07-15 16:08:35.969137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.969271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.969296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.969483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.969508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.969632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.969657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.969810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.969835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 
00:27:09.262 [2024-07-15 16:08:35.969983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.970169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.970344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.970502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.970691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.970868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.970899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.971077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.971103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.971266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.971292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.971444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.971469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.971624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.971649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 
00:27:09.262 [2024-07-15 16:08:35.972788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.972831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it.
00:27:09.262 [2024-07-15 16:08:35.972989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.973030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it.
[... four more identical retries on tqpair=0x7f16f8000b90 (16:08:35.973171-35.973782) and four on tqpair=0x10e6200 (16:08:35.973926-35.974462) follow, all connect() errno = 111 against addr=10.0.0.2, port=4420 ...]
[... interleaved mid-message with the retries above, the target application logged its startup notices; regrouped in timestamp order they read: ...]
00:27:09.262 [2024-07-15 16:08:35.973180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:09.262 [2024-07-15 16:08:35.973215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:09.262 [2024-07-15 16:08:35.973231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:09.262 [2024-07-15 16:08:35.973243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:09.262 [2024-07-15 16:08:35.973253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:09.262 [2024-07-15 16:08:35.973321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:09.262 [2024-07-15 16:08:35.973368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:09.262 [2024-07-15 16:08:35.973465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:09.262 [2024-07-15 16:08:35.973468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
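(For reference, the notices above describe SPDK's standard trace-capture workflow; a minimal sketch of the corresponding commands, run on the target host while the app is up. The tracepoint group name nvmf and the instance id 0 are quoted verbatim from the notices; the destination path of the copy is an assumption:
    # snapshot the nvmf tracepoints of shared-memory instance 0
    spdk_trace -s nvmf -i 0
    # with a single SPDK app running, the arguments can be omitted
    spdk_trace
    # or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # destination path is illustrative
Copying the file preserves the snapshot independently of the running application, which is convenient when collecting artifacts from a CI run.)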
00:27:09.262 [2024-07-15 16:08:35.974625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.974650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.974808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.974833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.974983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.975140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.975324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.975510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.975681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.262 [2024-07-15 16:08:35.975867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.262 [2024-07-15 16:08:35.975897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.262 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.976036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.976065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.976202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.976230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 
00:27:09.263 [2024-07-15 16:08:35.976445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.976471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.976629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.976655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.976819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.976845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.976994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.977203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.977387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.977538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.977725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.977881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.977907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.978044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 
00:27:09.263 [2024-07-15 16:08:35.978198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.978357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.978515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.978767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.978967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.978992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.979179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.979204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.979351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.979376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.979542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.979567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.979697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.979722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.979852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.979899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 
00:27:09.263 [2024-07-15 16:08:35.980028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.980212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.980368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.980518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.980678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.980836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.980861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.981004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.981162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.981314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.981489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 
00:27:09.263 [2024-07-15 16:08:35.981740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.981925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.981950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.263 [2024-07-15 16:08:35.982141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.263 [2024-07-15 16:08:35.982182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.263 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.982351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.982379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.982524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.982550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.982713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.982739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.982902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.982929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.983089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.983115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.983251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.983277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 00:27:09.264 [2024-07-15 16:08:35.983443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.264 [2024-07-15 16:08:35.983469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.264 qpair failed and we were unable to recover it. 
00:27:09.264 [2024-07-15 16:08:35.983634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.983659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.983892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.983919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.984901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.984949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.985102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.985130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.985290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.985316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.985441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.985467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.985629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.985655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.985811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.985852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.986968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.986995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.987123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.987148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.987305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.987330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.987476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.987515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.987679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.987707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.987868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.987901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.988934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.988960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.989197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.989222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.989477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.989502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.989633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.989658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.264 [2024-07-15 16:08:35.989802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.264 [2024-07-15 16:08:35.989827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.264 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.989966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.989999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.990149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.990174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.990332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.990357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.990513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.990538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.990670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.990695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.990851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.990881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.991917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.991943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.992913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.992939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.993941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.993968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.994907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.994933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.995947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.995973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.996151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.996178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.996303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.996329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.996490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.996516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.996660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.996686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.996819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.265 [2024-07-15 16:08:35.996845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.265 qpair failed and we were unable to recover it.
00:27:09.265 [2024-07-15 16:08:35.997012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.997052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.997217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.997244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.997402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.997429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.997669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.997696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.997868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.997901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.998939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.998980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:35.999964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:35.999991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.000217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.000243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.000376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.000402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.000559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.000584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.000724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.000750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.000883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.000933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.001922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.001961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.002953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.002980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.003113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.003138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.003268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.003294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.003431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.003456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.003632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.003670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.003800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.003827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.266 [2024-07-15 16:08:36.004067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.266 [2024-07-15 16:08:36.004095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.266 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.004230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.004257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.004429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.004455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.004600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.004625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.004790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.004816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.004962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.004990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.005119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.005146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.005309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.005335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.005493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.005519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.005666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.005705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.005847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.005881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.006883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.006923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.007178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.007205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.007394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.007419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.007548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.007574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.007810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.007835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.007972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.007999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.008144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.008308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.008497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.008654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.008850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.008985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.009179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.009342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.009490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.009651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.009859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.009892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.010055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.010081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.267 [2024-07-15 16:08:36.010210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.267 [2024-07-15 16:08:36.010235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.267 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.010361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.010386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.010507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.010532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.010658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.010683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.010896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.010922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.011894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.011920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.012850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.012880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.013078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.013229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.013377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.013565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.268 [2024-07-15 16:08:36.013762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.268 qpair failed and we were unable to recover it.
00:27:09.268 [2024-07-15 16:08:36.013912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.013940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.014116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.014303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.014464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.014651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.014814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.014976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.015132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.015306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.015520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 
00:27:09.268 [2024-07-15 16:08:36.015675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.015860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.015894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.016908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.016934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.017066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.017092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 00:27:09.268 [2024-07-15 16:08:36.017236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.268 [2024-07-15 16:08:36.017262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.268 qpair failed and we were unable to recover it. 
00:27:09.268 [2024-07-15 16:08:36.017401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.017427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.017559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.017585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.017714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.017741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.017903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.017930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.018881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.018908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 
00:27:09.269 [2024-07-15 16:08:36.019091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.019276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.019468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.019623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.019784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.019942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.019969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.020094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.020120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.020279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.020306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.020540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.020566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.020726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.020753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 
00:27:09.269 [2024-07-15 16:08:36.020887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.020913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.021075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.021100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.021224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.021250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.021412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.021438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.021599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.021625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.021788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.021814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 
00:27:09.269 [2024-07-15 16:08:36.022664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.022847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.022999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.023029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.023285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.023311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.023473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.023499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.023661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.023687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.023817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.023843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.023979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.024005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.024133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.024159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.269 [2024-07-15 16:08:36.024318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.024344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 
00:27:09.269 [2024-07-15 16:08:36.024504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.269 [2024-07-15 16:08:36.024530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.269 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.024686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.024712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.024852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.024885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.025913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.025941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.026097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 
00:27:09.270 [2024-07-15 16:08:36.026253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.026435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.026588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.026747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.026937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.026964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.027105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.027293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.027479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.027631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.027791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 
00:27:09.270 [2024-07-15 16:08:36.027956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.027983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.028958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.028985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.029121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.029285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.029449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 
00:27:09.270 [2024-07-15 16:08:36.029614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.029786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.029949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.029976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.030132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.030159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.030308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.030333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.030503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.030530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.030655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.030682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.030847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.030873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.031017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.031043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.031202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.031228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 
00:27:09.270 [2024-07-15 16:08:36.031371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.031396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.270 [2024-07-15 16:08:36.031556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.270 [2024-07-15 16:08:36.031582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.270 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.031735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.031761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.031924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.031950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.032924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.032951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 
00:27:09.271 [2024-07-15 16:08:36.033076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.033265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.033417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.033609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.033776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.033964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.033992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.034157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.034184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.034340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.034366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.034502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.034527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.034685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.034716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 
00:27:09.271 [2024-07-15 16:08:36.034940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.034967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.035963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.035989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.036226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.036252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.036412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.036438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.036565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.036590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 
00:27:09.271 [2024-07-15 16:08:36.036748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.036774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.036992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.037157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.037346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.037532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.037685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.271 [2024-07-15 16:08:36.037836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.271 [2024-07-15 16:08:36.037862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.271 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.038031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.038183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.038347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 
00:27:09.272 [2024-07-15 16:08:36.038507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.038661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.038839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.038865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.039869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.039902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.040027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 
00:27:09.272 [2024-07-15 16:08:36.040209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.040356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.040543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.040752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.040945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.040973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.041137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.041164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.041324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.041349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.041481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.041506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.041664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.041689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 00:27:09.272 [2024-07-15 16:08:36.041822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.272 [2024-07-15 16:08:36.041854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.272 qpair failed and we were unable to recover it. 
[... the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it triple repeats continuously from 16:08:36.042 through 16:08:36.075, cycling among tqpair values 0x7f1700000b90, 0x7f1708000b90, and 0x7f16f8000b90, all with addr=10.0.0.2, port=4420 ...]
00:27:09.277 [2024-07-15 16:08:36.075960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.075986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.076176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.076208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.076340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.076366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.076515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.076541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.076677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.076702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.076887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.076913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.077052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.077229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.077395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.077581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 
00:27:09.277 [2024-07-15 16:08:36.077779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.077944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.077971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.078951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.078991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.079181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.079209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.079350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.079376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 
00:27:09.277 [2024-07-15 16:08:36.079546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.079572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.079709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.079736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.079910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.079944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.080075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.080102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.080248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.080274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.080422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.080448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.277 qpair failed and we were unable to recover it. 00:27:09.277 [2024-07-15 16:08:36.080577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.277 [2024-07-15 16:08:36.080603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.080739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.080765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.080901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.080931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.081063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 
00:27:09.278 [2024-07-15 16:08:36.081224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.081414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.081573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.081738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.081925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.081951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.082085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.082250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.082445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.082625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.082784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 
00:27:09.278 [2024-07-15 16:08:36.082956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.082983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.083139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.083165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.083295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.083321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.083481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.083508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.083650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.083675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.083830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.083855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.084006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.084195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.084394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.084587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 
00:27:09.278 [2024-07-15 16:08:36.084787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.084948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.084976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.085172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.085331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.085483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.085649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.085833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.085984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.086144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.086332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 
00:27:09.278 [2024-07-15 16:08:36.086516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.086705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.086866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.086899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.087038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.087065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.087249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.087275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.087403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.278 [2024-07-15 16:08:36.087430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.278 qpair failed and we were unable to recover it. 00:27:09.278 [2024-07-15 16:08:36.087566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.087592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.087783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.087808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.087933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.087958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.088117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.088142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 
00:27:09.279 [2024-07-15 16:08:36.088280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.088305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.088461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.088486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.088622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.088648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.088807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.088833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.088986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.089194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.089388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.089551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.089714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.089899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.089930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 
00:27:09.279 [2024-07-15 16:08:36.090088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.090234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.090447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.090601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.090760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.090946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.090972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.091107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.091296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.091447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.091610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 
00:27:09.279 [2024-07-15 16:08:36.091793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.091964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.091990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.092175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.092331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.092514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.092668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.092834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.092979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.093155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.093329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 
00:27:09.279 [2024-07-15 16:08:36.093494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.093641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.093803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.093829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.094003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.094029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.094162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.094187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.094331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.094356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.094479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.279 [2024-07-15 16:08:36.094504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.279 qpair failed and we were unable to recover it. 00:27:09.279 [2024-07-15 16:08:36.094634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.094661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.094787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.094813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.094981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 
00:27:09.280 [2024-07-15 16:08:36.095134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.095345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.095497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.095661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.095845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.095870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.096009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.096034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.096171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.096211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.096418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.096445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.096579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.096606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.096765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.096791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 
00:27:09.280 [2024-07-15 16:08:36.096976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.097169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.097377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.097550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.097712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.097923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.097951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.098077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.098260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.098412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.098569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 
00:27:09.280 [2024-07-15 16:08:36.098764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.098950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.098976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.099129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.099325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.099487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.099671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.099826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.099988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.100014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.100148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.100173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 00:27:09.280 [2024-07-15 16:08:36.100302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.280 [2024-07-15 16:08:36.100327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420 00:27:09.280 qpair failed and we were unable to recover it. 
00:27:09.280 [2024-07-15 16:08:36.100458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.280 [2024-07-15 16:08:36.100483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.280 qpair failed and we were unable to recover it.
00:27:09.281 [2024-07-15 16:08:36.103136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.281 [2024-07-15 16:08:36.103176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1700000b90 with addr=10.0.0.2, port=4420
00:27:09.281 qpair failed and we were unable to recover it.
00:27:09.281 [2024-07-15 16:08:36.106575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.281 [2024-07-15 16:08:36.106614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.281 qpair failed and we were unable to recover it.
00:27:09.281 [2024-07-15 16:08:36.106794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.281 [2024-07-15 16:08:36.106834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.281 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats continuously from 16:08:36.100 through 16:08:36.117, cycling across tqpairs 0x7f1708000b90, 0x7f1700000b90, 0x10e6200, and 0x7f16f8000b90, always against addr=10.0.0.2, port=4420 ...]
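errno = 111 is ECONNREFUSED: each TCP SYN to 10.0.0.2:4420 is being answered with a RST, meaning nothing is listening on the target port at that moment. When triaging a run like this, a first question is whether every qpair is failing or only one of them. A minimal sketch (assuming the console output above has been saved to build.log; that filename is illustrative, not part of the harness) is to count the failure records per tqpair:

```bash
# Count connect() failures per qpair in the saved console log.
# "build.log" is a hypothetical filename for the captured output above.
grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn
```

A roughly even spread across all four tqpair addresses, as in this run, points at the target side (not yet listening, or deliberately torn down by the disconnect test) rather than at a single broken qpair.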
00:27:09.283 [2024-07-15 16:08:36.117605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.283 [2024-07-15 16:08:36.117631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.283 qpair failed and we were unable to recover it.
00:27:09.283 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:09.283 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:09.283 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:09.283 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:09.283 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failures on tqpair=0x7f16f8000b90 continue to interleave with the shell trace, 16:08:36.117 through 16:08:36.120 ...]
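The shell trace shows the harness returning from its target-startup path (`(( i == 0 ))`, `return 0`, then `timing_exit start_nvmf_tgt`) while the initiator's connects are still being refused, which is the window this disconnect test exercises. A standalone way to watch that window is a bash probe using bash's /dev/tcp redirection; this is a sketch only, with the address and port taken from the log above and the retry budget illustrative, not code from autotest_common.sh:

```bash
#!/usr/bin/env bash
# Probe whether the NVMe/TCP target at 10.0.0.2:4420 is accepting
# connections. connect() failing with errno 111 (ECONNREFUSED) means the
# port answered with a RST, i.e. no listener is bound yet.
addr=10.0.0.2   # traddr from the log above
port=4420       # trsvcid from the log above
for i in $(seq 1 10); do            # retry budget is illustrative
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "attempt ${i}: target is accepting connections"
        exit 0
    fi
    echo "attempt ${i}: connection refused or timed out, retrying"
    sleep 1
done
echo "target never came up on ${addr}:${port}" >&2
exit 1
```

Running such a probe alongside the test makes it easy to see whether the refused window is transient (target restarting) or permanent (target never bound the port).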
00:27:09.283 [2024-07-15 16:08:36.120636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.283 [2024-07-15 16:08:36.120662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.283 qpair failed and we were unable to recover it.
00:27:09.283 [2024-07-15 16:08:36.121252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.283 [2024-07-15 16:08:36.121291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.283 qpair failed and we were unable to recover it.
[... the same failure triplet repeats from 16:08:36.120 through 16:08:36.136, alternating mainly between tqpairs 0x10e6200, 0x7f16f8000b90, and 0x7f1708000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:09.286 [2024-07-15 16:08:36.136233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.136260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.136393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.136418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.136562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.136588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.136714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.136740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.136885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.136921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.137063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.137283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.137437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.137600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.137769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 
00:27:09.286 [2024-07-15 16:08:36.137943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.137970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.138158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.138342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.138505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.138665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.138834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.138994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.139020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.139150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.139182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.139309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.139334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 00:27:09.286 [2024-07-15 16:08:36.139494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.286 [2024-07-15 16:08:36.139520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.286 qpair failed and we were unable to recover it. 
00:27:09.286 [2024-07-15 16:08:36.139658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.286 [2024-07-15 16:08:36.139684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.286 qpair failed and we were unable to recover it.
[... one more failed attempt for tqpair=0x7f16f8000b90 ...]
00:27:09.286 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... one more failed attempt for tqpair=0x7f16f8000b90 ...]
00:27:09.286 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... one more failed attempt for tqpair=0x7f16f8000b90 ...]
00:27:09.286 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... two more failed attempts for tqpair=0x7f16f8000b90 ...]
00:27:09.286 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... failed attempts continue for tqpair=0x7f16f8000b90, then tqpair=0x10e6200 ...]
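For context on the two traced steps above: the trap line registers the suite's cleanup hooks (process_shm bookkeeping, then nvmftestfini teardown) to run on SIGINT/SIGTERM/EXIT, and rpc_cmd is the autotest harness's wrapper around SPDK's JSON-RPC client. A minimal standalone sketch of the same bdev-creation step -- assuming the stock scripts/rpc.py and its default /var/tmp/spdk.sock socket, not output captured by this run:

    # Hypothetical equivalent of the traced rpc_cmd call: create a 64 MB
    # RAM-backed bdev with a 512-byte block size, named Malloc0, for the
    # target to export on 10.0.0.2:4420 while the disconnect test runs.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0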
00:27:09.287 [2024-07-15 16:08:36.141174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.287 [2024-07-15 16:08:36.141200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.287 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats, alternating between tqpair=0x10e6200 and tqpair=0x7f16f8000b90, addr=10.0.0.2, port=4420, through 16:08:36.165432 ...]
00:27:09.557 [2024-07-15 16:08:36.165568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.165593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.165720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.165745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.165872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.165903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.166841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.166866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 00:27:09.557 [2024-07-15 16:08:36.167034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.557 [2024-07-15 16:08:36.167059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420 00:27:09.557 qpair failed and we were unable to recover it. 
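Every failure above is the same three-record pattern: posix.c:1038 reports connect() failing with errno = 111, nvme_tcp.c:2383 turns that into a socket connection error for the qpair, and the host gives up on the qpair. On Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at that instant, which is what you would expect while the target side of this disconnect test is still being rebuilt. A minimal sketch of the same refusal outside SPDK (illustrative only, not part of the log; assumes bash's /dev/tcp redirection and no listener on that port):

$ exec 3<>/dev/tcp/10.0.0.2/4420    # bash's built-in TCP client
bash: connect: Connection refused   # same ECONNREFUSED (errno 111) as above; message abbreviated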
00:27:09.557 Malloc0
00:27:09.557 [2024-07-15 16:08:36.167200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.557 [2024-07-15 16:08:36.167227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.557 qpair failed and we were unable to recover it.
00:27:09.557 [2024-07-15 16:08:36.167367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.557 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.557 [2024-07-15 16:08:36.167393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.557 qpair failed and we were unable to recover it.
00:27:09.557 [2024-07-15 16:08:36.167539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.557 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:09.557 [2024-07-15 16:08:36.167565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.557 qpair failed and we were unable to recover it.
00:27:09.557 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.557 [2024-07-15 16:08:36.167692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.557 [2024-07-15 16:08:36.167718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.557 qpair failed and we were unable to recover it.
00:27:09.557 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.557 [2024-07-15 16:08:36.167869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.557 [2024-07-15 16:08:36.167901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.557 qpair failed and we were unable to recover it.
[... identical failures on tqpair=0x10e6200 continue from 16:08:36.168066 through 16:08:36.168412 ...]
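Interleaved with the failure storm, the xtrace shows the target being rebuilt: host/target_disconnect.sh line 21 runs rpc_cmd nvmf_create_transport -t tcp -o. In the autotest harness, rpc_cmd is a thin wrapper that forwards its arguments to SPDK's scripts/rpc.py, so the step corresponds roughly to the sketch below (hedged: the wrapper's exact invocation and the meaning of -o can vary between SPDK versions; in the trees I know, -o toggles the TCP C2H-success optimization):

$ ./scripts/rpc.py nvmf_create_transport -t tcp -o
# -t tcp : create the NVMe-oF TCP transport inside the running nvmf_tgt
# -o     : C2H-success optimization toggle, passed through as in the trace

The *** TCP Transport Init *** notice a little further down is the target acknowledging this call.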
[... repeated connect()/qpair failures on tqpair=0x10e6200 continue from 16:08:36.168551 through 16:08:36.170759, all against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:27:09.558 [2024-07-15 16:08:36.170814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... repeated connect()/qpair failures on tqpair=0x10e6200 continue from 16:08:36.170895 through 16:08:36.171797 ...]
[... repeated connect()/qpair failures continue from 16:08:36.171939 through 16:08:36.178706: on tqpair=0x10e6200 up to 16:08:36.176608, a burst on tqpair=0x7f16f8000b90 from 16:08:36.176780 to 16:08:36.177678, then tqpair=0x10e6200 again; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:27:09.559 [2024-07-15 16:08:36.178851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.559 [2024-07-15 16:08:36.178899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.559 qpair failed and we were unable to recover it.
00:27:09.559 [2024-07-15 16:08:36.179055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.559 [2024-07-15 16:08:36.179091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.559 qpair failed and we were unable to recover it.
00:27:09.559 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.559 [2024-07-15 16:08:36.179246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.559 [2024-07-15 16:08:36.179274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.559 qpair failed and we were unable to recover it.
00:27:09.559 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:09.559 [2024-07-15 16:08:36.179413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.559 [2024-07-15 16:08:36.179440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.559 qpair failed and we were unable to recover it.
00:27:09.559 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.559 [2024-07-15 16:08:36.179567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.559 [2024-07-15 16:08:36.179594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.559 qpair failed and we were unable to recover it.
00:27:09.559 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical failures on tqpair=0x7f16f8000b90 continue from 16:08:36.179732 through 16:08:36.180084 ...]
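host/target_disconnect.sh line 22 then creates the subsystem the host side has been trying to reach all along. Assuming the same rpc_cmd-to-rpc.py forwarding as above, the equivalent direct call would be:

$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# nqn.2016-06.io.spdk:cnode1 : NQN the initiator connects to
# -a : allow any host NQN to connect (no host allow-list)
# -s : serial number the subsystem reports to hosts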
[... repeated connect()/qpair failures on tqpair=0x7f16f8000b90 continue from 16:08:36.180221 through 16:08:36.186547, all against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:27:09.560 [2024-07-15 16:08:36.186722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.186763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 [2024-07-15 16:08:36.186923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.186962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.560 [2024-07-15 16:08:36.187108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.187136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:09.560 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.560 [2024-07-15 16:08:36.187290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.187317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical failures on tqpair=0x10e6200 continue from 16:08:36.187453 through 16:08:36.188297 ...]
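Line 24 attaches Malloc0 (the RAM-backed bdev whose name surfaced just before the transport setup; its creation happens earlier, outside this excerpt) to the new subsystem. A sketch of the equivalent rpc.py call, again assuming rpc_cmd forwards its arguments verbatim:

$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# exposes bdev Malloc0 as a namespace of cnode1 (nsid auto-assigned unless given);
# the host sees it as an NVMe namespace once a listener is up and its
# connect() attempts to 10.0.0.2:4420 finally succeed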
00:27:09.560 [2024-07-15 16:08:36.188421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.188446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 [2024-07-15 16:08:36.188580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.188606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.560 [2024-07-15 16:08:36.188752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.560 [2024-07-15 16:08:36.188791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.560 qpair failed and we were unable to recover it.
00:27:09.561 [2024-07-15 16:08:36.195055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.561 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.561 [2024-07-15 16:08:36.195082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.561 qpair failed and we were unable to recover it.
00:27:09.561 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:09.561 [2024-07-15 16:08:36.195246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.561 [2024-07-15 16:08:36.195273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.561 qpair failed and we were unable to recover it.
00:27:09.561 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.561 [2024-07-15 16:08:36.195395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.561 [2024-07-15 16:08:36.195421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.562 [2024-07-15 16:08:36.195550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.562 [2024-07-15 16:08:36.195576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f16f8000b90 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 [2024-07-15 16:08:36.196622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.562 [2024-07-15 16:08:36.196661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 [2024-07-15 16:08:36.196800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.562 [2024-07-15 16:08:36.196827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1708000b90 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
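For context, host/target_disconnect.sh@25 above is the listener setup: it tells the target to accept NVMe/TCP connections for cnode1 on 10.0.0.2:4420, the address and port every failed connect() above has been aimed at. A sketch of the equivalent rpc.py calls; the transport-create step is an assumption (this log does not show it) and is needed only once per target:

    # The TCP transport must exist before any TCP listener can be added.
    scripts/rpc.py nvmf_create_transport -t tcp
    # Listen for cnode1 on the address/port used throughout this log.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420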
00:27:09.562 [2024-07-15 16:08:36.197021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.562 [2024-07-15 16:08:36.197059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 [2024-07-15 16:08:36.198884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.562 [2024-07-15 16:08:36.198910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e6200 with addr=10.0.0.2, port=4420
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 [2024-07-15 16:08:36.199063] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:09.562 [2024-07-15 16:08:36.201616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.562 [2024-07-15 16:08:36.201785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.562 [2024-07-15 16:08:36.201817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.562 [2024-07-15 16:08:36.201842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.562 [2024-07-15 16:08:36.201855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:09.562 [2024-07-15 16:08:36.201896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:09.562 qpair failed and we were unable to recover it.
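Two distinct failure modes are visible above. Before the "Target Listening" notice, connect() fails with errno 111 (ECONNREFUSED, nothing is listening yet); after it, the TCP connection succeeds but the Fabrics CONNECT command is rejected with sct 1, sc 130. Reading the NVMe-oF status codes, sct 1 is a command-specific status and sc 130 is 0x82, which appears to correspond to "Connect Invalid Parameters", consistent with the target's "Unknown controller ID 0x1" complaint when an I/O qpair names a controller that no longer exists. A quick way to check both decodes on a Linux box:

    # errno 111 is ECONNREFUSED in the kernel's generic errno table.
    grep -w 111 /usr/include/asm-generic/errno.h
    # Render sc 130 in hex for comparison against the Fabrics CONNECT status codes.
    printf 'sc 130 = 0x%02x\n' 130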
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.562 16:08:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1262734
00:27:09.562 [2024-07-15 16:08:36.211408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.562 [2024-07-15 16:08:36.211546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.562 [2024-07-15 16:08:36.211574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.562 [2024-07-15 16:08:36.211588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.562 [2024-07-15 16:08:36.211601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:09.562 [2024-07-15 16:08:36.211630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:09.562 qpair failed and we were unable to recover it.
00:27:09.562 [2024-07-15 16:08:36.221423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.562 [2024-07-15 16:08:36.221564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.562 [2024-07-15 16:08:36.221591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.562 [2024-07-15 16:08:36.221606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.562 [2024-07-15 16:08:36.221619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:09.562 [2024-07-15 16:08:36.221647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:09.562 qpair failed and we were unable to recover it.
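host/target_disconnect.sh@26 above additionally exposes the discovery service on the same address, and @50 then blocks on the initiator process (wait 1262734) while reconnects are attempted in the background. A sketch of that discovery step plus a probe from an initiator host; nvme-cli is not part of this test and is shown only as an illustration:

    # Expose the discovery subsystem over TCP, mirroring the rpc_cmd in the trace.
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # From a host with nvme-cli installed, the discovery log page should now be readable.
    nvme discover -t tcp -a 10.0.0.2 -s 4420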
00:27:09.825 [2024-07-15 16:08:36.622467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.622598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.622624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.622638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.622651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.622678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.632591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.632746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.632772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.632786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.632799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.632827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.642545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.642682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.642713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.642728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.642741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.642769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 
00:27:09.825 [2024-07-15 16:08:36.652638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.652769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.652794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.652808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.652822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.652849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.662604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.662732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.662757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.662771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.662784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.662812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.672613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.672761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.672786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.672800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.672812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.672841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 
00:27:09.825 [2024-07-15 16:08:36.682711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.682868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.682903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.682918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.682931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.682965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.692659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.692789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.692814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.692828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.692841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.692870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.702772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.702906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.702931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.702946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.702959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.702986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 
00:27:09.825 [2024-07-15 16:08:36.712742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.712882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.712908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.712922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.712935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.712963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.722772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.722902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.722927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.722941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.722955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.722982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 00:27:09.825 [2024-07-15 16:08:36.732781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.732925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.732955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.825 [2024-07-15 16:08:36.732970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.825 [2024-07-15 16:08:36.732983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.825 [2024-07-15 16:08:36.733011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.825 qpair failed and we were unable to recover it. 
00:27:09.825 [2024-07-15 16:08:36.742801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.825 [2024-07-15 16:08:36.742942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.825 [2024-07-15 16:08:36.742967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.826 [2024-07-15 16:08:36.742981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.826 [2024-07-15 16:08:36.742993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.826 [2024-07-15 16:08:36.743020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.826 qpair failed and we were unable to recover it. 00:27:09.826 [2024-07-15 16:08:36.752867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.826 [2024-07-15 16:08:36.753060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.826 [2024-07-15 16:08:36.753085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.826 [2024-07-15 16:08:36.753099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.826 [2024-07-15 16:08:36.753112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:09.826 [2024-07-15 16:08:36.753140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.826 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.762856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.763012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.763037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.763052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.763065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.763093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 
00:27:10.084 [2024-07-15 16:08:36.772988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.773132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.773157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.773171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.773190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.773219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.782990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.783135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.783159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.783173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.783186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.783214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.792961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.793119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.793143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.793157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.793168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.793196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 
00:27:10.084 [2024-07-15 16:08:36.803023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.803165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.803190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.803204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.803217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.803244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.812991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.813123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.813148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.813162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.813175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.813203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.823054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.823201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.823227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.823242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.823255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.823282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 
00:27:10.084 [2024-07-15 16:08:36.833106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.833284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.833309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.833323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.833336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.833364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.843081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.843219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.843245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.843259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.843272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.843300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.853118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.853249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.853274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.853288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.853302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.853329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 
00:27:10.084 [2024-07-15 16:08:36.863143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.084 [2024-07-15 16:08:36.863273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.084 [2024-07-15 16:08:36.863299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.084 [2024-07-15 16:08:36.863313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.084 [2024-07-15 16:08:36.863332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.084 [2024-07-15 16:08:36.863360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.084 qpair failed and we were unable to recover it. 00:27:10.084 [2024-07-15 16:08:36.873199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.873388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.873417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.873433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.873447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.873476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.883262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.883392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.883418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.883432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.883445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.883474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 
00:27:10.085 [2024-07-15 16:08:36.893229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.893367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.893392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.893406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.893419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.893447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.903283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.903458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.903483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.903497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.903510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.903538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.913277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.913413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.913438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.913453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.913466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.913493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 
00:27:10.085 [2024-07-15 16:08:36.923386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.923572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.923600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.923615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.923628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.923656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.933356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.933498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.933524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.933538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.933551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.933579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.943363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.943523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.943549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.943564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.943577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.943604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 
00:27:10.085 [2024-07-15 16:08:36.953508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.953653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.953678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.953692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.953710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.953739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.963412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.963542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.963568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.963582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.963595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.963622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.973451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.973581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.973606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.973620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.973633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.973663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 
00:27:10.085 [2024-07-15 16:08:36.983479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.983616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.983643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.983657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.983670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.983699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:36.993633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:36.993828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:36.993853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:36.993867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:36.993889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:36.993919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.085 [2024-07-15 16:08:37.003627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:37.003768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:37.003794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:37.003808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:37.003821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:37.003849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 
00:27:10.085 [2024-07-15 16:08:37.013625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.085 [2024-07-15 16:08:37.013760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.085 [2024-07-15 16:08:37.013786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.085 [2024-07-15 16:08:37.013800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.085 [2024-07-15 16:08:37.013813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.085 [2024-07-15 16:08:37.013841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.085 qpair failed and we were unable to recover it. 00:27:10.343 [2024-07-15 16:08:37.023719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.023852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.023884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.023901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.023914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.343 [2024-07-15 16:08:37.023942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.343 qpair failed and we were unable to recover it. 00:27:10.343 [2024-07-15 16:08:37.033642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.033783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.033808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.033823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.033836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.343 [2024-07-15 16:08:37.033864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.343 qpair failed and we were unable to recover it. 
00:27:10.343 [2024-07-15 16:08:37.043651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.043786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.043813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.043833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.043847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.343 [2024-07-15 16:08:37.043883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.343 qpair failed and we were unable to recover it. 00:27:10.343 [2024-07-15 16:08:37.053688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.053836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.053861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.053882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.053896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.343 [2024-07-15 16:08:37.053924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.343 qpair failed and we were unable to recover it. 00:27:10.343 [2024-07-15 16:08:37.063722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.063853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.063885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.063901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.063915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.343 [2024-07-15 16:08:37.063943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.343 qpair failed and we were unable to recover it. 
00:27:10.343 [2024-07-15 16:08:37.073751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.343 [2024-07-15 16:08:37.073894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.343 [2024-07-15 16:08:37.073920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.343 [2024-07-15 16:08:37.073934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.343 [2024-07-15 16:08:37.073947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.073975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.083767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.083914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.083940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.083954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.083967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.083995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.093846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.094021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.094047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.094061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.094074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.094102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 
00:27:10.344 [2024-07-15 16:08:37.103846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.104027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.104053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.104067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.104080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.104108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.113873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.114016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.114040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.114054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.114067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.114095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.123915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.124047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.124073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.124087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.124100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.124127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 
00:27:10.344 [2024-07-15 16:08:37.133945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.134123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.134148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.134168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.134183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.134213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.143944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.144079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.144104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.144119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.144131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.144159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 00:27:10.344 [2024-07-15 16:08:37.153990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.344 [2024-07-15 16:08:37.154136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.344 [2024-07-15 16:08:37.154161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.344 [2024-07-15 16:08:37.154175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.344 [2024-07-15 16:08:37.154188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.344 [2024-07-15 16:08:37.154215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.344 qpair failed and we were unable to recover it. 
00:27:10.344 [2024-07-15 16:08:37.164002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.164175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.164200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.164215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.164228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.164256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.174027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.174157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.174182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.174196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.174209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.174236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.184107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.184244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.184270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.184284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.184297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.184325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.194110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.194254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.194279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.194293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.194306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.194334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.204123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.204260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.204285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.204300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.204313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.204341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.214126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.214260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.214285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.214299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.214313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.214341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.224188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.224332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.224363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.224378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.224391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.224418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.234208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.234342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.234366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.234380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.234393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.234421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.244217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.244352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.244377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.244392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.244405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.244433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.254311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.254442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.254467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.254482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.254495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.254524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.344 [2024-07-15 16:08:37.264255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.344 [2024-07-15 16:08:37.264389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.344 [2024-07-15 16:08:37.264414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.344 [2024-07-15 16:08:37.264429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.344 [2024-07-15 16:08:37.264442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.344 [2024-07-15 16:08:37.264470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.344 qpair failed and we were unable to recover it.
00:27:10.603 [2024-07-15 16:08:37.274321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.603 [2024-07-15 16:08:37.274465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.603 [2024-07-15 16:08:37.274490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.603 [2024-07-15 16:08:37.274504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.603 [2024-07-15 16:08:37.274517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.603 [2024-07-15 16:08:37.274545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.603 qpair failed and we were unable to recover it.
00:27:10.603 [2024-07-15 16:08:37.284420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.603 [2024-07-15 16:08:37.284609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.603 [2024-07-15 16:08:37.284638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.603 [2024-07-15 16:08:37.284654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.603 [2024-07-15 16:08:37.284667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.284696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.294430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.294566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.294592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.294607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.294620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.294648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.304383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.304514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.304539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.304554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.304567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.304595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.314421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.314561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.314591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.314607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.314620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.314648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.324455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.324637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.324663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.324677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.324690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.324718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.334484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.334623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.334649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.334663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.334676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.334704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.344479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.344615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.344640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.344655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.344668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.344695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.354542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.354679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.354703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.354717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.354730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.354763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.364562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.364741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.364766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.364781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.364794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.364821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.374638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.374793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.374819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.374833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.374846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.374874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.384607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.384743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.384769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.384783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.384796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.384823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.394709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.394863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.394897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.394913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.394926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.394954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.404673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.404812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.404842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.404857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.404870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.404905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.414717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.414853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.414886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.414903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.414916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.414944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.424715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.604 [2024-07-15 16:08:37.424838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.604 [2024-07-15 16:08:37.424864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.604 [2024-07-15 16:08:37.424884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.604 [2024-07-15 16:08:37.424899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.604 [2024-07-15 16:08:37.424927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.604 qpair failed and we were unable to recover it.
00:27:10.604 [2024-07-15 16:08:37.434801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.434942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.434967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.434981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.434994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.435021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.444778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.444917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.444942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.444956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.444969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.445002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.454899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.455033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.455058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.455072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.455085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.455113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.464847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.464985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.465011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.465025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.465038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.465066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.474852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.474996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.475021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.475036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.475049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.475077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.484869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.485009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.485035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.485049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.485062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.485089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.494917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.495049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.495080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.495095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.495108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.495137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.504932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.505067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.505093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.505107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.505120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.505149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.515016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.515169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.515195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.515209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.515222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.515250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.605 [2024-07-15 16:08:37.525002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.605 [2024-07-15 16:08:37.525145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.605 [2024-07-15 16:08:37.525171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.605 [2024-07-15 16:08:37.525185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.605 [2024-07-15 16:08:37.525199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.605 [2024-07-15 16:08:37.525226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.605 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.535116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.535271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.865 [2024-07-15 16:08:37.535296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.865 [2024-07-15 16:08:37.535311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.865 [2024-07-15 16:08:37.535329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.865 [2024-07-15 16:08:37.535357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.865 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.545057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.545192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.865 [2024-07-15 16:08:37.545217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.865 [2024-07-15 16:08:37.545232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.865 [2024-07-15 16:08:37.545245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.865 [2024-07-15 16:08:37.545273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.865 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.555154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.555313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.865 [2024-07-15 16:08:37.555338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.865 [2024-07-15 16:08:37.555352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.865 [2024-07-15 16:08:37.555365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.865 [2024-07-15 16:08:37.555393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.865 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.565128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.565265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.865 [2024-07-15 16:08:37.565290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.865 [2024-07-15 16:08:37.565305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.865 [2024-07-15 16:08:37.565318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.865 [2024-07-15 16:08:37.565346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.865 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.575148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.575300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.865 [2024-07-15 16:08:37.575325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.865 [2024-07-15 16:08:37.575339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.865 [2024-07-15 16:08:37.575352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.865 [2024-07-15 16:08:37.575381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.865 qpair failed and we were unable to recover it.
00:27:10.865 [2024-07-15 16:08:37.585155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.865 [2024-07-15 16:08:37.585290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.585315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.585330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.585343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.585371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.595205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.595345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.595370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.595384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.595397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.595425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.605220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.605353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.605378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.605393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.605406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.605433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.615279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.615407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.615431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.615446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.615459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.615487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.625359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.625497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.625522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.625537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.625555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.625584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.635355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.635493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.635518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.635532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.635545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.635573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.645328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.645457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.645483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.645497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.645510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.645538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.655439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.655616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.655641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.655655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.655668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.655695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.665434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.665566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.665592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.665606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.665619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.665647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.675507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.675655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.675681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.675695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.675708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.675736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.685511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.685668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.685694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.685708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.685721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.685749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.695473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.695608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.695633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.695648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.695661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.695688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.705526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.705650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.705675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.705689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.705702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.705730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.715574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.715713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.715737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.715752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.866 [2024-07-15 16:08:37.715770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.866 [2024-07-15 16:08:37.715799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.866 qpair failed and we were unable to recover it.
00:27:10.866 [2024-07-15 16:08:37.725554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.866 [2024-07-15 16:08:37.725718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.866 [2024-07-15 16:08:37.725743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.866 [2024-07-15 16:08:37.725757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.867 [2024-07-15 16:08:37.725770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:10.867 [2024-07-15 16:08:37.725798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.867 qpair failed and we were unable to recover it.
00:27:10.867 [2024-07-15 16:08:37.735597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.735742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.735767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.735781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.735794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.735821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 00:27:10.867 [2024-07-15 16:08:37.745613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.745743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.745767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.745782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.745794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.745821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 00:27:10.867 [2024-07-15 16:08:37.755716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.755893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.755919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.755933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.755945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.755973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 
00:27:10.867 [2024-07-15 16:08:37.765672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.765840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.765865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.765886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.765901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.765928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 00:27:10.867 [2024-07-15 16:08:37.775714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.775847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.775873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.775895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.775908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.775936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 00:27:10.867 [2024-07-15 16:08:37.785740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.867 [2024-07-15 16:08:37.785896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.867 [2024-07-15 16:08:37.785921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.867 [2024-07-15 16:08:37.785935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.867 [2024-07-15 16:08:37.785948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:10.867 [2024-07-15 16:08:37.785976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.867 qpair failed and we were unable to recover it. 
00:27:11.127 [2024-07-15 16:08:37.795788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.795956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.795980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.795994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.796006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.796034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.805791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.805933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.805959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.805979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.805994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.806023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.815858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.816005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.816031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.816046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.816059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.816087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 
00:27:11.127 [2024-07-15 16:08:37.825857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.825997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.826022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.826037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.826050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.826078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.835909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.836048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.836073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.836088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.836101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.836129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.845979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.846110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.846136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.846150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.846163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.846191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 
00:27:11.127 [2024-07-15 16:08:37.855945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.856077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.856102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.856116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.856129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.856157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.865978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.866107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.866133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.866147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.866159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.866187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.876055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.876188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.876214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.876228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.876242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.876270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 
00:27:11.127 [2024-07-15 16:08:37.886056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.886243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.886270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.886284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.886297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.127 [2024-07-15 16:08:37.886325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.127 qpair failed and we were unable to recover it. 00:27:11.127 [2024-07-15 16:08:37.896097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.127 [2024-07-15 16:08:37.896232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.127 [2024-07-15 16:08:37.896258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.127 [2024-07-15 16:08:37.896281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.127 [2024-07-15 16:08:37.896295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.896324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.906122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.906254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.906279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.906293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.906306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.906334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 
00:27:11.128 [2024-07-15 16:08:37.916204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.916339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.916364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.916378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.916392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.916420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.926175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.926348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.926373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.926387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.926400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.926428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.936189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.936346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.936371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.936385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.936398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.936426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 
00:27:11.128 [2024-07-15 16:08:37.946218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.946410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.946435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.946449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.946462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.946489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.956258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.956391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.956417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.956431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.956443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.956471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.966336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.966465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.966490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.966504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.966517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.966545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 
00:27:11.128 [2024-07-15 16:08:37.976334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.976474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.976500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.976514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.976527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.976554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.986312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.986441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.986466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.986486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.986500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.986528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:37.996355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:37.996489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:37.996513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:37.996528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:37.996541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:37.996568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 
00:27:11.128 [2024-07-15 16:08:38.006385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:38.006520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:38.006545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:38.006559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:38.006572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:38.006600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:38.016428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:38.016559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:38.016585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:38.016599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:38.016611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:38.016639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:38.026428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:38.026556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:38.026581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:38.026596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:38.026608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:38.026636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 
00:27:11.128 [2024-07-15 16:08:38.036500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:38.036638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.128 [2024-07-15 16:08:38.036663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.128 [2024-07-15 16:08:38.036677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.128 [2024-07-15 16:08:38.036690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.128 [2024-07-15 16:08:38.036718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.128 qpair failed and we were unable to recover it. 00:27:11.128 [2024-07-15 16:08:38.046497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.128 [2024-07-15 16:08:38.046638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.129 [2024-07-15 16:08:38.046663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.129 [2024-07-15 16:08:38.046678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.129 [2024-07-15 16:08:38.046691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.129 [2024-07-15 16:08:38.046719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.129 qpair failed and we were unable to recover it. 00:27:11.129 [2024-07-15 16:08:38.056542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.129 [2024-07-15 16:08:38.056694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.129 [2024-07-15 16:08:38.056719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.129 [2024-07-15 16:08:38.056733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.129 [2024-07-15 16:08:38.056746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.129 [2024-07-15 16:08:38.056775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.129 qpair failed and we were unable to recover it. 
00:27:11.389 [2024-07-15 16:08:38.066573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.066708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.066737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.066752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.066765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.066794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.076590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.076732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.076763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.076778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.076791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.076819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.086618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.086791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.086818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.086832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.086846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.086874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 
00:27:11.389 [2024-07-15 16:08:38.096618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.096795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.096821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.096838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.096851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.097012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.106667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.106805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.106830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.106844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.106857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.106895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.116728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.116922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.116947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.116961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.116974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.117008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 
00:27:11.389 [2024-07-15 16:08:38.126722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.126858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.126891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.126907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.126920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.126948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.136774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.136911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.136937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.136951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.136964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.136992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.146780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.146923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.146949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.146963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.146976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.147004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 
00:27:11.389 [2024-07-15 16:08:38.156839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.157019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.157044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.157059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.157072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.157102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.166850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.166988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.167018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.167033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.167046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.167074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.176912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.177043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.177069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.177084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.177097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.177125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 
00:27:11.389 [2024-07-15 16:08:38.186920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.187089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.187114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.187129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.187142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.389 [2024-07-15 16:08:38.187170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.389 qpair failed and we were unable to recover it. 00:27:11.389 [2024-07-15 16:08:38.196928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.389 [2024-07-15 16:08:38.197063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.389 [2024-07-15 16:08:38.197089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.389 [2024-07-15 16:08:38.197104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.389 [2024-07-15 16:08:38.197117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.197145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 00:27:11.390 [2024-07-15 16:08:38.207034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.207167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.207193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.207207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.207220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.207254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 
00:27:11.390 [2024-07-15 16:08:38.216985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.217136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.217162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.217176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.217189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.217216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 00:27:11.390 [2024-07-15 16:08:38.227013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.227144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.227169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.227183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.227196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.227224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 00:27:11.390 [2024-07-15 16:08:38.237041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.237173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.237198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.237212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.237224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.237252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 
00:27:11.390 [2024-07-15 16:08:38.247071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.247206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.247231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.247245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.247259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.247287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 00:27:11.390 [2024-07-15 16:08:38.257095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.257229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.257259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.257274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.257287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.257315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 00:27:11.390 [2024-07-15 16:08:38.267139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.390 [2024-07-15 16:08:38.267272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.390 [2024-07-15 16:08:38.267297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.390 [2024-07-15 16:08:38.267311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.390 [2024-07-15 16:08:38.267324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:11.390 [2024-07-15 16:08:38.267352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.390 qpair failed and we were unable to recover it. 
[... the same seven-message CONNECT failure cycle repeats for 66 further attempts, retried roughly every 10 ms from [2024-07-15 16:08:38.277143] through [2024-07-15 16:08:38.929194] (pipeline time 00:27:11.390 to 00:27:12.168); every attempt fails identically: Unknown controller ID 0x1 on the target side, Connect command failed rc -5 with sct 1, sc 130 on the host side, Failed to connect tqpair=0x10e6200, CQ transport error -6 (No such device or address) on qpair id 3, and "qpair failed and we were unable to recover it." ...]
00:27:12.168 [2024-07-15 16:08:38.938981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.939109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.939134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.939148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.939161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.939190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:38.949015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.949140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.949165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.949179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.949192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.949219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:38.959168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.959317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.959342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.959356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.959369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.959396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 
00:27:12.168 [2024-07-15 16:08:38.969078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.969264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.969289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.969303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.969316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.969349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:38.979124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.979261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.979287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.979301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.979314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.979342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:38.989128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.989273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.989299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.989313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.989326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.989354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 
00:27:12.168 [2024-07-15 16:08:38.999192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:38.999363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:38.999388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:38.999402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:38.999415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:38.999442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.009196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.009326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.009351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.009365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.009377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.009405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.019252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.019400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.019430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.019445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.019458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.019486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 
00:27:12.168 [2024-07-15 16:08:39.029260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.029389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.029414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.029428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.029442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.029469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.039302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.039436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.039460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.039475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.039488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.039515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.049329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.049465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.049489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.049504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.049516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.049545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 
00:27:12.168 [2024-07-15 16:08:39.059455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.059605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.059631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.059646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.059658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.059691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.069383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.069514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.069539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.069553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.069566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.069594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 00:27:12.168 [2024-07-15 16:08:39.079430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.168 [2024-07-15 16:08:39.079578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.168 [2024-07-15 16:08:39.079603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.168 [2024-07-15 16:08:39.079618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.168 [2024-07-15 16:08:39.079631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.168 [2024-07-15 16:08:39.079658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.168 qpair failed and we were unable to recover it. 
00:27:12.168 [2024-07-15 16:08:39.089450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.169 [2024-07-15 16:08:39.089585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.169 [2024-07-15 16:08:39.089612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.169 [2024-07-15 16:08:39.089626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.169 [2024-07-15 16:08:39.089640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.169 [2024-07-15 16:08:39.089668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.169 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.099469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.099641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.099667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.099681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.099694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.099721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.109474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.109624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.109654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.109669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.109683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.109711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.119512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.119647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.119672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.119687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.119700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.119728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.129618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.129788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.129814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.129829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.129842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.129870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.139588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.139766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.139792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.139806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.139819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.139847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.149593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.149727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.149751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.149765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.149784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.149814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.159617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.159795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.159820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.159834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.159847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.159874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.169628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.169764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.169789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.169804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.169817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.169845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.179679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.179812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.179838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.179852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.179865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.179902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.189693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.189827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.189853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.189867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.189887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.189916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.199791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.199948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.199977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.199993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.200006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.200035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.209761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.209954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.209980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.209994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.210007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.210036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.219785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.219933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.219959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.219974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.219987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.220015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.229832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.229975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.230002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.230022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.230035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.230065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.239891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.240027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.240053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.240067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.240086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.240115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.249868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.250010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.250035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.250050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.250063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.250092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.259894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.260023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.260048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.260062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.260076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.260104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 
00:27:12.426 [2024-07-15 16:08:39.269961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.270100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.270125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.270139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.270152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.426 [2024-07-15 16:08:39.270180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.426 qpair failed and we were unable to recover it. 00:27:12.426 [2024-07-15 16:08:39.279958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.426 [2024-07-15 16:08:39.280090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.426 [2024-07-15 16:08:39.280116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.426 [2024-07-15 16:08:39.280130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.426 [2024-07-15 16:08:39.280143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.280171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 00:27:12.427 [2024-07-15 16:08:39.290041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.290187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.290212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.290227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.290240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.290268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 
00:27:12.427 [2024-07-15 16:08:39.300010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.300141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.300166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.300180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.300193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.300220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 00:27:12.427 [2024-07-15 16:08:39.310036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.310168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.310193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.310208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.310221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.310249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 00:27:12.427 [2024-07-15 16:08:39.320093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.320234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.320259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.320273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.320286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.320314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 
00:27:12.427 [2024-07-15 16:08:39.330111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.330249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.330275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.330296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.330310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.330338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 00:27:12.427 [2024-07-15 16:08:39.340121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.340248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.340274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.340288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.340302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.340329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 00:27:12.427 [2024-07-15 16:08:39.350145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.427 [2024-07-15 16:08:39.350274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.427 [2024-07-15 16:08:39.350299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.427 [2024-07-15 16:08:39.350313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.427 [2024-07-15 16:08:39.350326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.427 [2024-07-15 16:08:39.350355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.427 qpair failed and we were unable to recover it. 
00:27:12.684 [2024-07-15 16:08:39.360217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.360372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.360397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.360411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.360424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.360452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 00:27:12.684 [2024-07-15 16:08:39.370240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.370377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.370403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.370418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.370431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.370458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 00:27:12.684 [2024-07-15 16:08:39.380250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.380394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.380419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.380433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.380446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.380474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 
00:27:12.684 [2024-07-15 16:08:39.390277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.390412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.390437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.390451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.390464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.390492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 00:27:12.684 [2024-07-15 16:08:39.400289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.400436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.400462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.400477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.400489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.400517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 00:27:12.684 [2024-07-15 16:08:39.410344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.684 [2024-07-15 16:08:39.410477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.684 [2024-07-15 16:08:39.410502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.684 [2024-07-15 16:08:39.410516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.684 [2024-07-15 16:08:39.410529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:12.684 [2024-07-15 16:08:39.410557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.684 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-07-15 16:08:40.052167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.052358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.052385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.052405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.052419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.052449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.062189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.062340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.062366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.062381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.062394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.062423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.072264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.072397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.072422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.072437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.072450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.072478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-07-15 16:08:40.082244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.082414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.082440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.082454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.082467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.082495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.092256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.092391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.092417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.092431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.092450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.092479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.102361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.102529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.102555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.102570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.102583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.102610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-07-15 16:08:40.112353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.112492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.112517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.112532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.112545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.112573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.122383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.122548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.122573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.122587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.122600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.122628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-07-15 16:08:40.132417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.209 [2024-07-15 16:08:40.132556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.209 [2024-07-15 16:08:40.132581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.209 [2024-07-15 16:08:40.132596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.209 [2024-07-15 16:08:40.132609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.209 [2024-07-15 16:08:40.132636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.467 [2024-07-15 16:08:40.142495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.142629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.142655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.142669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.142683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.142711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.152500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.152665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.152691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.152706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.152719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.152746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.162485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.162659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.162685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.162699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.162712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.162741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 
00:27:13.467 [2024-07-15 16:08:40.172501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.172642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.172668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.172683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.172696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.172724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.182547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.182696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.182721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.182742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.182755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.182784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.192557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.192690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.192715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.192730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.192743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.192770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 
00:27:13.467 [2024-07-15 16:08:40.202627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.202796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.202821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.202835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.202848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.202883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.212632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.467 [2024-07-15 16:08:40.212822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.467 [2024-07-15 16:08:40.212847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.467 [2024-07-15 16:08:40.212860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.467 [2024-07-15 16:08:40.212873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.467 [2024-07-15 16:08:40.212911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.467 qpair failed and we were unable to recover it. 00:27:13.467 [2024-07-15 16:08:40.222668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.222800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.222825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.222840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.222853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.222890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 
00:27:13.468 [2024-07-15 16:08:40.232713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.232862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.232893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.232908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.232921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.232949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.242752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.242921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.242947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.242961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.242974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.243002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.252780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.252925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.252951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.252965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.252978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.253006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 
00:27:13.468 [2024-07-15 16:08:40.262770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.262909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.262934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.262949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.262961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.262989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.272865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.273036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.273064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.273086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.273100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.273130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.282845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.283037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.283063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.283078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.283091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.283120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 
00:27:13.468 [2024-07-15 16:08:40.292859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.292998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.293023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.293038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.293051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.293079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.302894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.303033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.303059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.303073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.303087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.303115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.312898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.313028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.313053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.313067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.313080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.313108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 
00:27:13.468 [2024-07-15 16:08:40.322967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.323107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.323133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.323147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.468 [2024-07-15 16:08:40.323160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.468 [2024-07-15 16:08:40.323188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.468 qpair failed and we were unable to recover it. 00:27:13.468 [2024-07-15 16:08:40.332984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.468 [2024-07-15 16:08:40.333118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.468 [2024-07-15 16:08:40.333151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.468 [2024-07-15 16:08:40.333165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.333179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.333207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-07-15 16:08:40.343034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.343169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.343194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.343210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.343223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.343251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 
00:27:13.469 [2024-07-15 16:08:40.353063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.353191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.353217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.353231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.353244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.353272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-07-15 16:08:40.363092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.363245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.363270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.363291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.363305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.363333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-07-15 16:08:40.373127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.373265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.373291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.373306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.373318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.373346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 
00:27:13.469 [2024-07-15 16:08:40.383103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.383253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.383279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.383293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.383306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.383334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-07-15 16:08:40.393144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.469 [2024-07-15 16:08:40.393271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.469 [2024-07-15 16:08:40.393297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.469 [2024-07-15 16:08:40.393311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.469 [2024-07-15 16:08:40.393324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.469 [2024-07-15 16:08:40.393352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.403232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.403380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.403405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.403419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.403432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.403460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 
00:27:13.728 [2024-07-15 16:08:40.413207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.413347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.413372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.413385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.413399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.413426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.423251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.423385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.423411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.423426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.423439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.423466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.433311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.433510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.433535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.433549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.433563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.433590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 
00:27:13.728 [2024-07-15 16:08:40.443357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.443499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.443525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.443545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.443559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.443588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.453417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.453605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.453637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.453652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.453666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.453694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.463368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.463559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.463585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.463599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.463612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.463640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 
00:27:13.728 [2024-07-15 16:08:40.473413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.473552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.473577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.473591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.473604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.473634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.483460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.483636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.483662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.483676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.483689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.483717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.493463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.493593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.493618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.493633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.493646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.493680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 
00:27:13.728 [2024-07-15 16:08:40.503492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.503637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.503663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.503678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.503692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.503720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.513479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.513609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.513635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.728 [2024-07-15 16:08:40.513649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.728 [2024-07-15 16:08:40.513661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.728 [2024-07-15 16:08:40.513689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.728 qpair failed and we were unable to recover it. 00:27:13.728 [2024-07-15 16:08:40.523545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.728 [2024-07-15 16:08:40.523703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.728 [2024-07-15 16:08:40.523728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.729 [2024-07-15 16:08:40.523742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.729 [2024-07-15 16:08:40.523755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:13.729 [2024-07-15 16:08:40.523782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.729 qpair failed and we were unable to recover it. 
00:27:13.729 [2024-07-15 16:08:40.533558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.533693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.533719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.533733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.533746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.533774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.543635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.543797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.543828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.543843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.543856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.543890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.553618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.553758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.553785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.553805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.553819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.553849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.563636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.563780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.563805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.563819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.563832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.563860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.573664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.573800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.573826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.573840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.573853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.573888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.583697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.583832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.583859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.583874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.583894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.583929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.593720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.593850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.593882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.593899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.593913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.593941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.603786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.603932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.603958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.603972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.603985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.604013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.613758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.613916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.613941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.613956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.613969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.613997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.623792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.623937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.623961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.623975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.623987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.624015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.633838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.634026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.634058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.634073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.634087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.634116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.643923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.644062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.644088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.644103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.644116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.644145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.729 [2024-07-15 16:08:40.653946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.729 [2024-07-15 16:08:40.654107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.729 [2024-07-15 16:08:40.654132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.729 [2024-07-15 16:08:40.654147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.729 [2024-07-15 16:08:40.654160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.729 [2024-07-15 16:08:40.654190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.729 qpair failed and we were unable to recover it.
00:27:13.988 [2024-07-15 16:08:40.663927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.988 [2024-07-15 16:08:40.664065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.664091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.664106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.664121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.664150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.673967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.674105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.674131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.674146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.674161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.674195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.684027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.684212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.684239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.684254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.684268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.684296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.694005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.694173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.694198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.694213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.694227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.694255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.704038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.704176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.704202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.704227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.704240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.704269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.714069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.714207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.714233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.714247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.714261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.714289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.724121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.724318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.724349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.724365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.724379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.724408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.734200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.734359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.734384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.734399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.734414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.734442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.744154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.744290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.744315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.744331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.744345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.744373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.754181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.754337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.754363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.754378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.754391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.754420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.764262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.764443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.764469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.764483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.764505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.764536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.774253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.774396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.774421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.774436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.774450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.774478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.784326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.784511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.784536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.784551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.784564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.784592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.794295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.794439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.794465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.989 [2024-07-15 16:08:40.794480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.989 [2024-07-15 16:08:40.794494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.989 [2024-07-15 16:08:40.794522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.989 qpair failed and we were unable to recover it.
00:27:13.989 [2024-07-15 16:08:40.804379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.989 [2024-07-15 16:08:40.804552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.989 [2024-07-15 16:08:40.804576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.804591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.804604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.804632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.814379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.814545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.814571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.814586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.814599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.814628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.824430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.824566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.824592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.824607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.824620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.824649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.834413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.834597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.834622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.834637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.834651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.834679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.844487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.844638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.844663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.844678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.844691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.844720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.854498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.854641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.854667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.854682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.854704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.854735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.864534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.864668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.864693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.864708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.864721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.864749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.874556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.874696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.874721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.874737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.874750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.874794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.884619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.884794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.884820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.884836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.884850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.884895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.894619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.894763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.894788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.894803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.894818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.894846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.904614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.904776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.904803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.904818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.904832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.904860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:13.990 [2024-07-15 16:08:40.914687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.990 [2024-07-15 16:08:40.914854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.990 [2024-07-15 16:08:40.914887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.990 [2024-07-15 16:08:40.914904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.990 [2024-07-15 16:08:40.914929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:13.990 [2024-07-15 16:08:40.914958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.990 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.924724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.924896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.924924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.924940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.924954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.924984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.934703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.934849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.934883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.934902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.934916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.934945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.944758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.944897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.944923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.944944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.944959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.944989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.954771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.954915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.954941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.954956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.954970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.954999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.964840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.965008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.965034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.965049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.965063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.965091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.974804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.974949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.974975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.974990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.975004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.975033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.984870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.985033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.985059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.985073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.985087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.985115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:40.994892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:40.995031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:40.995058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:40.995073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:40.995087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:40.995116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:41.004909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:41.005054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:41.005080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:41.005095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:41.005109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:41.005138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:41.014952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:41.015107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:41.015133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:41.015148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:41.015161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:41.015189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:41.024967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.249 [2024-07-15 16:08:41.025104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.249 [2024-07-15 16:08:41.025130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.249 [2024-07-15 16:08:41.025144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.249 [2024-07-15 16:08:41.025158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.249 [2024-07-15 16:08:41.025186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.249 qpair failed and we were unable to recover it.
00:27:14.249 [2024-07-15 16:08:41.034982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.035127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.035153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.035175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.035189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.035218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.045044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.045184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.045210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.045225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.045239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.045270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.055052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.055183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.055209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.055223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.055237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.055266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.065094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.065245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.065274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.065290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.065304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.065349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.075095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.075229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.075255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.075270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.075285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.075313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.085152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.085296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.085322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.085337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.085351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.085380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.095188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.095390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.095430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.095446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.095460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.095502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.105211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.105346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.105372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.105387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.105400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.105429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.115235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.115377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.115404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.115419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.115433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.115462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.125249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.125387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.125413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.125434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.125449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.125478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.135299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.135443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.135469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.135484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.135497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.135526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.145328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.145501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.145527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.145542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.145570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.145598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.155359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.155499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.155524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.155540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.155554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.155582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.165390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.165556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.165582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.165611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.165626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.250 [2024-07-15 16:08:41.165653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [2024-07-15 16:08:41.175367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.250 [2024-07-15 16:08:41.175505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.250 [2024-07-15 16:08:41.175530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.250 [2024-07-15 16:08:41.175545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.250 [2024-07-15 16:08:41.175560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.251 [2024-07-15 16:08:41.175588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.251 qpair failed and we were unable to recover it.
00:27:14.510 [2024-07-15 16:08:41.185445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.510 [2024-07-15 16:08:41.185588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.510 [2024-07-15 16:08:41.185615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.510 [2024-07-15 16:08:41.185630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.510 [2024-07-15 16:08:41.185644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.510 [2024-07-15 16:08:41.185688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-07-15 16:08:41.195591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.510 [2024-07-15 16:08:41.195777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.510 [2024-07-15 16:08:41.195802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.510 [2024-07-15 16:08:41.195834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.510 [2024-07-15 16:08:41.195847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.510 [2024-07-15 16:08:41.195898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-07-15 16:08:41.205536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.510 [2024-07-15 16:08:41.205683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.510 [2024-07-15 16:08:41.205709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.510 [2024-07-15 16:08:41.205724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.510 [2024-07-15 16:08:41.205737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.510 [2024-07-15 16:08:41.205766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-07-15 16:08:41.215523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.510 [2024-07-15 16:08:41.215694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.510 [2024-07-15 16:08:41.215725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.510 [2024-07-15 16:08:41.215741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.510 [2024-07-15 16:08:41.215755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200
00:27:14.510 [2024-07-15 16:08:41.215783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-07-15 16:08:41.225510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.510 [2024-07-15 16:08:41.225643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.510 [2024-07-15 16:08:41.225670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.510 [2024-07-15 16:08:41.225684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.510 [2024-07-15 16:08:41.225698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.510 [2024-07-15 16:08:41.225727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.510 qpair failed and we were unable to recover it. 00:27:14.510 [2024-07-15 16:08:41.235594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.510 [2024-07-15 16:08:41.235739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.510 [2024-07-15 16:08:41.235766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.510 [2024-07-15 16:08:41.235781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.510 [2024-07-15 16:08:41.235794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.510 [2024-07-15 16:08:41.235837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.510 qpair failed and we were unable to recover it. 00:27:14.510 [2024-07-15 16:08:41.245613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.510 [2024-07-15 16:08:41.245793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.510 [2024-07-15 16:08:41.245818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.510 [2024-07-15 16:08:41.245833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.510 [2024-07-15 16:08:41.245847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.510 [2024-07-15 16:08:41.245884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.510 qpair failed and we were unable to recover it. 
00:27:14.510 [2024-07-15 16:08:41.255609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.510 [2024-07-15 16:08:41.255801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.510 [2024-07-15 16:08:41.255827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.255842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.255855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.255896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.265630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.265777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.265803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.265818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.265832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.265860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.275678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.275860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.275894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.275910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.275924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.275953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-07-15 16:08:41.285718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.285859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.285895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.285911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.285925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.285954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.295737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.295925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.295950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.295965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.295979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.296008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.305749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.305909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.305939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.305955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.305969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.305998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-07-15 16:08:41.315764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.315913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.315941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.315961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.315975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.316005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.325813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.325995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.326021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.326037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.326051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.326080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.335855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.336025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.336052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.336067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.336081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.336109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-07-15 16:08:41.345949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.346095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.346121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.346136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.346150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.346185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.355926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.356082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.356108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.356123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.356137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.356181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.365954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.366127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.366153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.366168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.366182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.366210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-07-15 16:08:41.375953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.376094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.376120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.376134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.376148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.376177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.386003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.386139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.386165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.511 [2024-07-15 16:08:41.386179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.511 [2024-07-15 16:08:41.386193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.511 [2024-07-15 16:08:41.386221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-07-15 16:08:41.396042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.511 [2024-07-15 16:08:41.396182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.511 [2024-07-15 16:08:41.396215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.512 [2024-07-15 16:08:41.396236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.512 [2024-07-15 16:08:41.396249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.512 [2024-07-15 16:08:41.396294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-07-15 16:08:41.406054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.512 [2024-07-15 16:08:41.406188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.512 [2024-07-15 16:08:41.406215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.512 [2024-07-15 16:08:41.406230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.512 [2024-07-15 16:08:41.406243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.512 [2024-07-15 16:08:41.406271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-07-15 16:08:41.416091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.512 [2024-07-15 16:08:41.416234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.512 [2024-07-15 16:08:41.416261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.512 [2024-07-15 16:08:41.416277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.512 [2024-07-15 16:08:41.416290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.512 [2024-07-15 16:08:41.416334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-07-15 16:08:41.426091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.512 [2024-07-15 16:08:41.426261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.512 [2024-07-15 16:08:41.426287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.512 [2024-07-15 16:08:41.426302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.512 [2024-07-15 16:08:41.426316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.512 [2024-07-15 16:08:41.426345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-07-15 16:08:41.436178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.512 [2024-07-15 16:08:41.436343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.512 [2024-07-15 16:08:41.436369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.512 [2024-07-15 16:08:41.436384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.512 [2024-07-15 16:08:41.436398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.512 [2024-07-15 16:08:41.436433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.773 [2024-07-15 16:08:41.446185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.773 [2024-07-15 16:08:41.446320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.773 [2024-07-15 16:08:41.446346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.773 [2024-07-15 16:08:41.446362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.773 [2024-07-15 16:08:41.446375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.773 [2024-07-15 16:08:41.446420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.773 qpair failed and we were unable to recover it. 00:27:14.773 [2024-07-15 16:08:41.456188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.773 [2024-07-15 16:08:41.456322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.456347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.456362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.456376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.456404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 
00:27:14.774 [2024-07-15 16:08:41.466231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.466369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.466396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.466411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.466425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.466454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.476228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.476361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.476386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.476402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.476415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.476444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.486259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.486389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.486422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.486437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.486458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.486487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 
00:27:14.774 [2024-07-15 16:08:41.496317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.496449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.496475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.496490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.496504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.496548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.506306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.506439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.506465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.506481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.506493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.506522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.516332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.516468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.516494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.516509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.516522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.516550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 
00:27:14.774 [2024-07-15 16:08:41.526375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.526515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.526541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.526557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.526580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.526610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.536494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.536649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.536675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.536690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.536703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.536747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.546452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.546582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.546608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.546623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.546637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.546665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 
00:27:14.774 [2024-07-15 16:08:41.556485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.556661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.556687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.556703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.556726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.774 [2024-07-15 16:08:41.556755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.774 qpair failed and we were unable to recover it. 00:27:14.774 [2024-07-15 16:08:41.566549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.774 [2024-07-15 16:08:41.566689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.774 [2024-07-15 16:08:41.566715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.774 [2024-07-15 16:08:41.566730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.774 [2024-07-15 16:08:41.566743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.566786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.576520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.576703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.576729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.576760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.576774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.576802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 
00:27:14.775 [2024-07-15 16:08:41.586593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.586752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.586779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.586794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.586807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.586836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.596588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.596719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.596744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.596759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.596773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.596816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.606626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.606763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.606789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.606804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.606817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.606846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 
00:27:14.775 [2024-07-15 16:08:41.616688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.616846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.616872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.616948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.616970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.617002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.626662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.626801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.626827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.626842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.626856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.626891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.636674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.636802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.636828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.636842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.636855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.636892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 
00:27:14.775 [2024-07-15 16:08:41.646713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.646865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.646903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.646920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.646933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.646967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.656732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.656868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.656901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.656927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.656940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.656969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.666793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.666962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.666988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.667004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.667017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.667046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 
00:27:14.775 [2024-07-15 16:08:41.676854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.677015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.775 [2024-07-15 16:08:41.677041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.775 [2024-07-15 16:08:41.677057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.775 [2024-07-15 16:08:41.677070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.775 [2024-07-15 16:08:41.677099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.775 qpair failed and we were unable to recover it. 00:27:14.775 [2024-07-15 16:08:41.686933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.775 [2024-07-15 16:08:41.687074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.776 [2024-07-15 16:08:41.687100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.776 [2024-07-15 16:08:41.687116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.776 [2024-07-15 16:08:41.687130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.776 [2024-07-15 16:08:41.687157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.776 qpair failed and we were unable to recover it. 00:27:14.776 [2024-07-15 16:08:41.696893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.776 [2024-07-15 16:08:41.697051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.776 [2024-07-15 16:08:41.697078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.776 [2024-07-15 16:08:41.697093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.776 [2024-07-15 16:08:41.697106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:14.776 [2024-07-15 16:08:41.697135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.776 qpair failed and we were unable to recover it. 
00:27:15.300 [2024-07-15 16:08:42.037871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.300 [2024-07-15 16:08:42.038030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.300 [2024-07-15 16:08:42.038061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.300 [2024-07-15 16:08:42.038076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.300 [2024-07-15 16:08:42.038090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:15.300 [2024-07-15 16:08:42.038121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.300 qpair failed and we were unable to recover it. 00:27:15.300 [2024-07-15 16:08:42.047910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.300 [2024-07-15 16:08:42.048052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.300 [2024-07-15 16:08:42.048080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.300 [2024-07-15 16:08:42.048095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.300 [2024-07-15 16:08:42.048109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10e6200 00:27:15.300 [2024-07-15 16:08:42.048139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.300 qpair failed and we were unable to recover it. 00:27:15.300 [2024-07-15 16:08:42.057928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.300 [2024-07-15 16:08:42.058062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.300 [2024-07-15 16:08:42.058096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.300 [2024-07-15 16:08:42.058113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.300 [2024-07-15 16:08:42.058126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1700000b90 00:27:15.300 [2024-07-15 16:08:42.058157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.300 qpair failed and we were unable to recover it. 
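Note that the last block above fails on a different transport qpair (tqpair=0x7f1700000b90, qpair id 2), so the stale-controller rejection is hitting more than one queue; immediately below, the keep-alive submission fails and the host resets and re-attaches the controller. For reference, a similar disconnect-and-recover cycle can be provoked by hand against a running SPDK target. This is only a sketch of one way to do it (run from the SPDK repo root; the subsystem NQN and address are the ones in this log, the sleep duration is arbitrary), not the mechanism target_disconnect.sh itself uses:

  rpc=./scripts/rpc.py
  # Drop the TCP listener: established qpairs fail and the host starts
  # logging CONNECT retries much like the ones above.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  sleep 5   # give the host time to exhaust retries and fail its keep-alive
  # Restore the listener: the host resets the controller and re-attaches.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4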
00:27:15.300 [2024-07-15 16:08:42.067972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.300 [2024-07-15 16:08:42.068127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.300 [2024-07-15 16:08:42.068161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.300 [2024-07-15 16:08:42.068179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.300 [2024-07-15 16:08:42.068193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1700000b90 00:27:15.300 [2024-07-15 16:08:42.068224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.300 qpair failed and we were unable to recover it. 00:27:15.300 [2024-07-15 16:08:42.068359] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:15.300 A controller has encountered a failure and is being reset. 00:27:15.300 Controller properly reset. 00:27:15.300 Initializing NVMe Controllers 00:27:15.300 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:15.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:15.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:15.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:15.300 Initialization complete. Launching workers. 
00:27:15.300 Starting thread on core 1 00:27:15.300 Starting thread on core 2 00:27:15.300 Starting thread on core 3 00:27:15.300 Starting thread on core 0 00:27:15.300 16:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:15.300 00:27:15.300 real 0m10.820s 00:27:15.300 user 0m17.253s 00:27:15.300 sys 0m5.592s 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.301 ************************************ 00:27:15.301 END TEST nvmf_target_disconnect_tc2 00:27:15.301 ************************************ 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.301 rmmod nvme_tcp 00:27:15.301 rmmod nvme_fabrics 00:27:15.301 rmmod nvme_keyring 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1263144 ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1263144 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1263144 ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1263144 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1263144 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263144' 00:27:15.301 killing process with pid 1263144 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1263144 00:27:15.301 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1263144 00:27:15.870 
16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.870 16:08:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.810 16:08:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.810 00:27:17.810 real 0m15.576s 00:27:17.810 user 0m43.548s 00:27:17.810 sys 0m7.498s 00:27:17.810 16:08:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.810 16:08:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:17.810 ************************************ 00:27:17.810 END TEST nvmf_target_disconnect 00:27:17.810 ************************************ 00:27:17.810 16:08:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:17.810 16:08:44 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:17.810 16:08:44 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.810 16:08:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.810 16:08:44 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:17.810 00:27:17.810 real 19m34.424s 00:27:17.810 user 46m22.844s 00:27:17.810 sys 4m52.768s 00:27:17.810 16:08:44 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.810 16:08:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.810 ************************************ 00:27:17.810 END TEST nvmf_tcp 00:27:17.810 ************************************ 00:27:17.810 16:08:44 -- common/autotest_common.sh@1142 -- # return 0 00:27:17.810 16:08:44 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:17.810 16:08:44 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:17.810 16:08:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:17.810 16:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.810 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:17.810 ************************************ 00:27:17.810 START TEST spdkcli_nvmf_tcp 00:27:17.810 ************************************ 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:17.810 * Looking for test storage... 
00:27:17.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:17.810 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1264347 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1264347 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1264347 ']' 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.068 16:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.068 [2024-07-15 16:08:44.778464] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:18.068 [2024-07-15 16:08:44.778541] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264347 ] 00:27:18.068 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.068 [2024-07-15 16:08:44.835530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.068 [2024-07-15 16:08:44.946904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.068 [2024-07-15 16:08:44.946928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.326 16:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:18.326 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:18.326 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:18.326 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:18.326 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:18.326 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:18.326 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:18.326 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:18.326 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:18.326 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:18.326 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:18.326 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:18.326 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:18.326 ' 00:27:20.868 [2024-07-15 16:08:47.627478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.245 [2024-07-15 16:08:48.867809] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:24.783 [2024-07-15 16:08:51.147061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:26.686 [2024-07-15 16:08:53.093240] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:28.066 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:28.066 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:28.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:28.066 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:28.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:28.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:28.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:28.066 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:28.066 16:08:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.324 16:08:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:28.324 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:28.324 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:28.324 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:28.324 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:28.324 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:28.324 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:28.324 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:28.324 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:28.324 ' 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:33.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:33.600 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:33.600 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:33.600 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1264347 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1264347 ']' 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1264347 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.600 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264347 00:27:33.858 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.858 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.858 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264347' 00:27:33.858 killing process with pid 1264347 00:27:33.858 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1264347 00:27:33.858 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1264347 00:27:34.117 16:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:34.117 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:34.117 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1264347 ']' 00:27:34.117 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1264347 00:27:34.117 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1264347 ']' 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1264347 00:27:34.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1264347) - No such process 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1264347 is not found' 00:27:34.118 Process with pid 1264347 is not found 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:34.118 00:27:34.118 real 0m16.147s 00:27:34.118 user 0m34.164s 00:27:34.118 sys 0m0.830s 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.118 16:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.118 ************************************ 00:27:34.118 END TEST spdkcli_nvmf_tcp 00:27:34.118 ************************************ 00:27:34.118 16:09:00 -- common/autotest_common.sh@1142 -- # return 0 00:27:34.118 16:09:00 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:34.118 16:09:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:34.118 16:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.118 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:27:34.118 ************************************ 00:27:34.118 START TEST nvmf_identify_passthru 00:27:34.118 ************************************ 00:27:34.118 16:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:34.118 * Looking for test storage... 00:27:34.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:34.118 16:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.118 16:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:34.118 16:09:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.118 16:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.118 16:09:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:34.118 16:09:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:34.118 16:09:00 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:34.118 16:09:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.023 16:09:02 
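The paths/export.sh trace above prepends the same tool directories (/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin) to PATH every time the file is sourced, so the variable visibly accumulates duplicate entries over the run. A minimal sketch of an idempotent alternative; the helper name path_prepend is hypothetical and not part of the SPDK scripts:

  # path_prepend DIR: prepend DIR to PATH only if it is not already present (hypothetical helper)
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;                # already on PATH, nothing to do
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH

The duplication is harmless for command lookup (the first match wins), but it makes every subsequent PATH dump in the log longer.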
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:36.023 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:36.023 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:36.023 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:36.023 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
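The gather_supported_nvmf_pci_devs trace above matches each E810 function by device ID (0x8086:0x159b, bound to the ice driver) and then globs /sys/bus/pci/devices/$pci/net/ to find the kernel interface attached to it (cvl_0_0 and cvl_0_1 here). A self-contained sketch of the same sysfs lookup, assuming the PCI address is passed as the first argument:

  #!/usr/bin/env bash
  # Print the net interfaces bound to a PCI function, e.g.: ./netdevs.sh 0000:0a:00.0
  pci=${1:?usage: $0 <pci-address>}
  shopt -s nullglob                                   # an unmatched glob expands to nothing
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  (( ${#pci_net_devs[@]} )) || { echo "no net device under $pci" >&2; exit 1; }
  for dev in "${pci_net_devs[@]}"; do
      echo "Found net device under $pci: ${dev##*/}"  # strip the sysfs path prefix
  done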
00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.023 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:27:36.284 00:27:36.284 --- 10.0.0.2 ping statistics --- 00:27:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.284 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:27:36.284 00:27:36.284 --- 10.0.0.1 ping statistics --- 00:27:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.284 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.284 16:09:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.284 16:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:36.284 16:09:02 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.284 16:09:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:36.284 16:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:36.284 16:09:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:36.284 16:09:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:36.284 16:09:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:27:36.284 16:09:03 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:27:36.284 16:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:36.284 16:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:36.284 16:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:36.284 16:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:36.284 16:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:36.284 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.473 
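get_first_nvme_bdf above resolves the test controller's PCI address by asking gen_nvme.sh for the generated bdev config and filtering it with jq, then scrapes the controller serial out of the spdk_nvme_identify dump. A condensed sketch of that extraction, assuming the same workspace layout as the trace; the awk pattern folds the trace's separate grep and awk steps into one:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # First NVMe PCI address from the generated bdev config
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  # Capture the third whitespace-separated field of the "Serial Number:" / "Model Number:"
  # lines, exactly as the test does (so the model comes back as just "INTEL")
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
           awk '/Serial Number:/ {print $3}')
  model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
          awk '/Model Number:/ {print $3}')
  echo "bdf=$bdf serial=$serial model=$model"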
16:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:40.473 16:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:40.473 16:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:40.473 16:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:40.473 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.727 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:44.727 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:44.727 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.727 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:44.727 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.727 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1269577 00:27:44.728 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:44.728 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:44.728 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1269577 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1269577 ']' 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.728 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.728 [2024-07-15 16:09:11.544813] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:44.728 [2024-07-15 16:09:11.544942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.728 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.728 [2024-07-15 16:09:11.609510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.986 [2024-07-15 16:09:11.721052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.986 [2024-07-15 16:09:11.721112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:44.986 [2024-07-15 16:09:11.721140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.986 [2024-07-15 16:09:11.721152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.986 [2024-07-15 16:09:11.721171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.986 [2024-07-15 16:09:11.721301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.986 [2024-07-15 16:09:11.721363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.986 [2024-07-15 16:09:11.721433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.986 [2024-07-15 16:09:11.721435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:44.986 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.986 INFO: Log level set to 20 00:27:44.986 INFO: Requests: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "method": "nvmf_set_config", 00:27:44.986 "id": 1, 00:27:44.986 "params": { 00:27:44.986 "admin_cmd_passthru": { 00:27:44.986 "identify_ctrlr": true 00:27:44.986 } 00:27:44.986 } 00:27:44.986 } 00:27:44.986 00:27:44.986 INFO: response: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "id": 1, 00:27:44.986 "result": true 00:27:44.986 } 00:27:44.986 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.986 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.986 INFO: Setting log level to 20 00:27:44.986 INFO: Setting log level to 20 00:27:44.986 INFO: Log level set to 20 00:27:44.986 INFO: Log level set to 20 00:27:44.986 INFO: Requests: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "method": "framework_start_init", 00:27:44.986 "id": 1 00:27:44.986 } 00:27:44.986 00:27:44.986 INFO: Requests: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "method": "framework_start_init", 00:27:44.986 "id": 1 00:27:44.986 } 00:27:44.986 00:27:44.986 [2024-07-15 16:09:11.861275] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:44.986 INFO: response: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "id": 1, 00:27:44.986 "result": true 00:27:44.986 } 00:27:44.986 00:27:44.986 INFO: response: 00:27:44.986 { 00:27:44.986 "jsonrpc": "2.0", 00:27:44.986 "id": 1, 00:27:44.986 "result": true 00:27:44.986 } 00:27:44.986 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.986 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.986 16:09:11 nvmf_identify_passthru -- 
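The rpc_cmd calls above are thin wrappers around scripts/rpc.py, and the INFO blocks show the exact JSON-RPC requests and responses exchanged on the socket. A sketch of the same bring-up driven directly with rpc.py, assuming the target was launched with --wait-for-rpc (as in the nvmf_tgt command earlier) so that nvmf_set_config is accepted before framework_start_init:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  # Enable identify passthru while the framework is still waiting for init
  "$rpc" -s "$sock" nvmf_set_config --passthru-identify-ctrlr
  "$rpc" -s "$sock" framework_start_init
  # TCP transport with an 8192-byte IO unit, matching nvmf_create_transport -t tcp -o -u 8192
  "$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192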
common/autotest_common.sh@10 -- # set +x 00:27:44.986 INFO: Setting log level to 40 00:27:44.986 INFO: Setting log level to 40 00:27:44.986 INFO: Setting log level to 40 00:27:44.986 [2024-07-15 16:09:11.871350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.986 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.986 16:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.986 16:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.272 Nvme0n1 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.272 [2024-07-15 16:09:14.765951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.272 [ 00:27:48.272 { 00:27:48.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:48.272 "subtype": "Discovery", 00:27:48.272 "listen_addresses": [], 00:27:48.272 "allow_any_host": true, 00:27:48.272 "hosts": [] 00:27:48.272 }, 00:27:48.272 { 00:27:48.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.272 "subtype": "NVMe", 00:27:48.272 "listen_addresses": [ 00:27:48.272 { 00:27:48.272 "trtype": "TCP", 00:27:48.272 "adrfam": "IPv4", 00:27:48.272 "traddr": "10.0.0.2", 00:27:48.272 "trsvcid": "4420" 00:27:48.272 } 00:27:48.272 ], 00:27:48.272 "allow_any_host": true, 00:27:48.272 "hosts": [], 00:27:48.272 "serial_number": 
"SPDK00000000000001", 00:27:48.272 "model_number": "SPDK bdev Controller", 00:27:48.272 "max_namespaces": 1, 00:27:48.272 "min_cntlid": 1, 00:27:48.272 "max_cntlid": 65519, 00:27:48.272 "namespaces": [ 00:27:48.272 { 00:27:48.272 "nsid": 1, 00:27:48.272 "bdev_name": "Nvme0n1", 00:27:48.272 "name": "Nvme0n1", 00:27:48.272 "nguid": "5E551EED7B1A439F86226F5ED5F10171", 00:27:48.272 "uuid": "5e551eed-7b1a-439f-8622-6f5ed5f10171" 00:27:48.272 } 00:27:48.272 ] 00:27:48.272 } 00:27:48.272 ] 00:27:48.272 16:09:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:48.272 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:48.272 16:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:48.272 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:48.533 16:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.533 rmmod nvme_tcp 00:27:48.533 rmmod nvme_fabrics 00:27:48.533 rmmod nvme_keyring 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:48.533 16:09:15 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1269577 ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1269577 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1269577 ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1269577 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1269577 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1269577' 00:27:48.533 killing process with pid 1269577 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1269577 00:27:48.533 16:09:15 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1269577 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.451 16:09:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.451 16:09:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:50.451 16:09:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.353 16:09:18 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.353 00:27:52.353 real 0m18.070s 00:27:52.353 user 0m26.910s 00:27:52.353 sys 0m2.327s 00:27:52.353 16:09:18 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.353 16:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.353 ************************************ 00:27:52.353 END TEST nvmf_identify_passthru 00:27:52.353 ************************************ 00:27:52.353 16:09:18 -- common/autotest_common.sh@1142 -- # return 0 00:27:52.353 16:09:18 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:52.353 16:09:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.353 16:09:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.353 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:27:52.353 ************************************ 00:27:52.353 START TEST nvmf_dif 00:27:52.353 ************************************ 00:27:52.353 16:09:18 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:52.353 * Looking for test storage... 
00:27:52.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.353 16:09:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.353 16:09:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.353 16:09:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.353 16:09:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.353 16:09:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.353 16:09:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.353 16:09:19 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:52.353 16:09:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:52.353 16:09:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.353 16:09:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.353 16:09:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.353 16:09:19 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.353 16:09:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.255 16:09:20 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:27:54.255 00:27:54.255 --- 10.0.0.2 ping statistics --- 00:27:54.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.255 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:27:54.255 00:27:54.255 --- 10.0.0.1 ping statistics --- 00:27:54.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.255 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:54.255 16:09:20 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:55.190 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:55.190 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:55.190 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:55.190 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:55.190 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:55.190 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:55.190 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:55.190 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:55.190 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:55.190 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:55.190 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:55.190 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:55.190 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:55.190 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:55.190 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:55.190 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:55.448 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:55.448 16:09:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:55.448 16:09:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1272724 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:55.448 16:09:22 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1272724 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1272724 ']' 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.448 16:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.448 [2024-07-15 16:09:22.361451] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:55.448 [2024-07-15 16:09:22.361526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.707 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.707 [2024-07-15 16:09:22.426386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.707 [2024-07-15 16:09:22.533905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.707 [2024-07-15 16:09:22.533961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.707 [2024-07-15 16:09:22.533990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.707 [2024-07-15 16:09:22.534001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.707 [2024-07-15 16:09:22.534011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
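At this point nvmftestinit has turned the two-port E810 NIC into a back-to-back test bed: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace at 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace at 10.0.0.1, both directions are ping-verified, and nvmf_tgt is launched inside the namespace so every later rpc_cmd talks to a target that only sees the namespaced port. Condensed from the trace above, the setup amounts to the following (a sketch; interface names and paths are specific to this run):

# reset any stale addressing, then move the target port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends of the point-to-point link
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic (port 4420) through on the initiator side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# start the target inside the namespace; -i 0 is the shm id, -e 0xFFFF the
# tracepoint mask (run from the spdk checkout)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF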
00:27:55.707 [2024-07-15 16:09:22.534048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:55.966 16:09:22 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 16:09:22 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.966 16:09:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:55.966 16:09:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 [2024-07-15 16:09:22.681515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.966 16:09:22 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 ************************************ 00:27:55.966 START TEST fio_dif_1_default 00:27:55.966 ************************************ 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 bdev_null0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.966 [2024-07-15 16:09:22.745816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.966 { 00:27:55.966 "params": { 00:27:55.966 "name": "Nvme$subsystem", 00:27:55.966 "trtype": "$TEST_TRANSPORT", 00:27:55.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.966 "adrfam": "ipv4", 00:27:55.966 "trsvcid": "$NVMF_PORT", 00:27:55.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.966 "hdgst": ${hdgst:-false}, 00:27:55.966 "ddgst": ${ddgst:-false} 00:27:55.966 }, 00:27:55.966 "method": "bdev_nvme_attach_controller" 00:27:55.966 } 00:27:55.966 EOF 00:27:55.966 )") 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:55.966 "params": { 00:27:55.966 "name": "Nvme0", 00:27:55.966 "trtype": "tcp", 00:27:55.966 "traddr": "10.0.0.2", 00:27:55.966 "adrfam": "ipv4", 00:27:55.966 "trsvcid": "4420", 00:27:55.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.966 "hdgst": false, 00:27:55.966 "ddgst": false 00:27:55.966 }, 00:27:55.966 "method": "bdev_nvme_attach_controller" 00:27:55.966 }' 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:55.966 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:55.967 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:55.967 16:09:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.224 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:56.224 fio-3.35 00:27:56.224 Starting 1 thread 00:27:56.224 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.452 00:28:08.452 filename0: (groupid=0, jobs=1): err= 0: pid=1272954: Mon Jul 15 16:09:33 2024 00:28:08.452 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10021msec) 00:28:08.452 slat (nsec): min=6846, max=62308, avg=8811.89, stdev=3451.48 00:28:08.452 clat (usec): min=40879, max=45423, avg=41040.28, stdev=350.15 00:28:08.452 lat (usec): min=40886, max=45452, avg=41049.09, stdev=350.82 00:28:08.452 clat percentiles (usec): 00:28:08.452 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:08.452 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:08.452 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:08.452 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:28:08.452 | 99.99th=[45351] 00:28:08.452 bw ( KiB/s): min= 384, max= 416, per=99.59%, avg=388.80, stdev=11.72, samples=20 00:28:08.452 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:28:08.452 lat (msec) : 50=100.00% 00:28:08.452 cpu : usr=89.75%, sys=9.98%, ctx=20, majf=0, minf=251 00:28:08.452 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.452 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.452 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:08.452 00:28:08.452 Run status group 0 (all jobs): 00:28:08.452 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10021-10021msec 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.452 00:28:08.452 real 0m11.278s 00:28:08.452 user 0m10.344s 00:28:08.452 sys 0m1.290s 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.452 16:09:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 ************************************ 00:28:08.452 END TEST fio_dif_1_default 00:28:08.452 ************************************ 00:28:08.452 16:09:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:08.453 16:09:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:08.453 16:09:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:08.453 16:09:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 ************************************ 00:28:08.453 START TEST fio_dif_1_multi_subsystems 00:28:08.453 ************************************ 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
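The multi-subsystem variant repeats the same per-subsystem bring-up once per sub-id. Stripped of the harness wrapper, each create_subsystem N in the trace below boils down to four RPCs against the running target (a sketch using scripts/rpc.py, which is an assumption about what rpc_cmd ultimately invokes; the harness additionally routes the calls through the namespace):

# 64 MiB null bdev: 512-byte blocks plus 16 bytes of per-block metadata,
# protection information type 1 (values from NULL_SIZE/NULL_BLOCK_SIZE/NULL_META)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# an NVMe-oF subsystem, a namespace backed by the null bdev, and a TCP listener
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420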
00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 bdev_null0 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 [2024-07-15 16:09:34.076354] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 bdev_null1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 16:09:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.453 { 00:28:08.453 "params": { 00:28:08.453 "name": "Nvme$subsystem", 00:28:08.453 "trtype": "$TEST_TRANSPORT", 00:28:08.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.453 "adrfam": "ipv4", 00:28:08.453 "trsvcid": "$NVMF_PORT", 00:28:08.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.453 "hdgst": ${hdgst:-false}, 00:28:08.453 "ddgst": ${ddgst:-false} 00:28:08.453 }, 00:28:08.453 "method": "bdev_nvme_attach_controller" 00:28:08.453 } 00:28:08.453 EOF 00:28:08.453 )") 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.453 { 00:28:08.453 "params": { 00:28:08.453 "name": "Nvme$subsystem", 00:28:08.453 "trtype": "$TEST_TRANSPORT", 00:28:08.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.453 "adrfam": "ipv4", 00:28:08.453 "trsvcid": "$NVMF_PORT", 00:28:08.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.453 "hdgst": ${hdgst:-false}, 00:28:08.453 "ddgst": ${ddgst:-false} 00:28:08.453 }, 00:28:08.453 "method": "bdev_nvme_attach_controller" 00:28:08.453 } 00:28:08.453 EOF 00:28:08.453 )") 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
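Note that fio never reads a file from disk here: both the job description and the SPDK bdev configuration arrive over anonymous file descriptors. The JSON printed next is the output of gen_nvmf_target_json, one bdev_nvme_attach_controller entry per subsystem merged by jq, and fio's spdk_bdev engine attaches to each controller before the job starts. Mechanically it is roughly the following (a sketch of the fd plumbing; the exact redirection syntax in dif.sh may differ):

# fd 61 carries the generated fio job sections, fd 62 the bdev JSON config
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
         61< <(gen_fio_conf) 62< <(create_json_sub_conf 0 1)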
00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.453 "params": { 00:28:08.453 "name": "Nvme0", 00:28:08.453 "trtype": "tcp", 00:28:08.453 "traddr": "10.0.0.2", 00:28:08.453 "adrfam": "ipv4", 00:28:08.453 "trsvcid": "4420", 00:28:08.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.453 "hdgst": false, 00:28:08.453 "ddgst": false 00:28:08.453 }, 00:28:08.453 "method": "bdev_nvme_attach_controller" 00:28:08.453 },{ 00:28:08.453 "params": { 00:28:08.453 "name": "Nvme1", 00:28:08.453 "trtype": "tcp", 00:28:08.453 "traddr": "10.0.0.2", 00:28:08.453 "adrfam": "ipv4", 00:28:08.453 "trsvcid": "4420", 00:28:08.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.453 "hdgst": false, 00:28:08.453 "ddgst": false 00:28:08.453 }, 00:28:08.453 "method": "bdev_nvme_attach_controller" 00:28:08.453 }' 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.453 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:08.454 16:09:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:08.454 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:08.454 fio-3.35 00:28:08.454 Starting 2 threads 00:28:08.454 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.416 00:28:18.416 filename0: (groupid=0, jobs=1): err= 0: pid=1274364: Mon Jul 15 16:09:45 2024 00:28:18.416 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:28:18.416 slat (nsec): min=7399, max=51161, avg=11435.62, stdev=6569.78 00:28:18.416 clat (usec): min=787, max=43824, avg=21057.82, stdev=20139.62 00:28:18.416 lat (usec): min=795, max=43839, avg=21069.26, stdev=20137.81 00:28:18.416 clat percentiles (usec): 00:28:18.416 | 1.00th=[ 799], 5.00th=[ 807], 10.00th=[ 816], 20.00th=[ 832], 00:28:18.416 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[41157], 60.00th=[41157], 00:28:18.416 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:18.416 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:28:18.416 | 99.99th=[43779] 
00:28:18.416 bw ( KiB/s): min= 672, max= 768, per=50.21%, avg=760.00, stdev=22.92, samples=20 00:28:18.416 iops : min= 168, max= 192, avg=190.00, stdev= 5.73, samples=20 00:28:18.416 lat (usec) : 1000=49.79% 00:28:18.416 lat (msec) : 50=50.21% 00:28:18.416 cpu : usr=96.25%, sys=3.43%, ctx=34, majf=0, minf=220 00:28:18.416 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.416 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.416 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:18.416 filename1: (groupid=0, jobs=1): err= 0: pid=1274365: Mon Jul 15 16:09:45 2024 00:28:18.416 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:28:18.416 slat (nsec): min=6521, max=60851, avg=10175.24, stdev=4984.70 00:28:18.416 clat (usec): min=798, max=43833, avg=21070.95, stdev=20140.87 00:28:18.416 lat (usec): min=805, max=43848, avg=21081.12, stdev=20139.52 00:28:18.416 clat percentiles (usec): 00:28:18.416 | 1.00th=[ 807], 5.00th=[ 816], 10.00th=[ 824], 20.00th=[ 840], 00:28:18.416 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:28:18.416 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:18.416 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:28:18.416 | 99.99th=[43779] 00:28:18.416 bw ( KiB/s): min= 672, max= 768, per=50.14%, avg=759.58, stdev=25.78, samples=19 00:28:18.416 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:28:18.416 lat (usec) : 1000=49.16% 00:28:18.416 lat (msec) : 2=0.63%, 50=50.21% 00:28:18.416 cpu : usr=97.15%, sys=2.57%, ctx=17, majf=0, minf=114 00:28:18.416 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.416 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.416 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:18.416 00:28:18.416 Run status group 0 (all jobs): 00:28:18.416 READ: bw=1514KiB/s (1550kB/s), 758KiB/s-758KiB/s (776kB/s-777kB/s), io=14.8MiB (15.6MB), run=10003-10041msec 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.675 00:28:18.675 real 0m11.398s 00:28:18.675 user 0m20.730s 00:28:18.675 sys 0m0.892s 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:18.675 16:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:18.675 ************************************ 00:28:18.675 END TEST fio_dif_1_multi_subsystems 00:28:18.675 ************************************ 00:28:18.675 16:09:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:18.676 16:09:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:18.676 16:09:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:18.676 16:09:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.676 16:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:18.676 ************************************ 00:28:18.676 START TEST fio_dif_rand_params 00:28:18.676 ************************************ 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:18.676 16:09:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.676 bdev_null0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.676 [2024-07-15 16:09:45.516926] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.676 { 00:28:18.676 "params": { 00:28:18.676 "name": "Nvme$subsystem", 00:28:18.676 "trtype": "$TEST_TRANSPORT", 00:28:18.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.676 "adrfam": "ipv4", 00:28:18.676 "trsvcid": "$NVMF_PORT", 00:28:18.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.676 "hdgst": ${hdgst:-false}, 00:28:18.676 "ddgst": ${ddgst:-false} 00:28:18.676 }, 00:28:18.676 "method": "bdev_nvme_attach_controller" 00:28:18.676 } 00:28:18.676 EOF 00:28:18.676 )") 
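fio_dif_rand_params sweeps the same pipeline across different protection and I/O geometries; this first pass uses DIF type 3 with 128 KiB random reads, three jobs at queue depth 3 for 5 seconds. The only change on the target side is the protection type of the null bdev; the transport was already created with --dif-insert-or-strip (target/dif.sh@50), so the target generates and strips the per-block metadata on the wire:

# same null bdev shape as before, but with type 3 protection information:
# broadly, per T10 DIF, the guard CRC is still verified while the reference
# tag is application-defined rather than tied to the LBA as in type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3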
00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:18.676 "params": { 00:28:18.676 "name": "Nvme0", 00:28:18.676 "trtype": "tcp", 00:28:18.676 "traddr": "10.0.0.2", 00:28:18.676 "adrfam": "ipv4", 00:28:18.676 "trsvcid": "4420", 00:28:18.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:18.676 "hdgst": false, 00:28:18.676 "ddgst": false 00:28:18.676 }, 00:28:18.676 "method": "bdev_nvme_attach_controller" 00:28:18.676 }' 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:18.676 16:09:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.934 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:18.934 ... 
00:28:18.934 fio-3.35 00:28:18.934 Starting 3 threads 00:28:18.934 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.496 00:28:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=1275762: Mon Jul 15 16:09:51 2024 00:28:25.496 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(109MiB/5001msec) 00:28:25.496 slat (nsec): min=4313, max=34948, avg=12686.96, stdev=2338.51 00:28:25.496 clat (usec): min=5308, max=94724, avg=17164.13, stdev=15762.76 00:28:25.496 lat (usec): min=5321, max=94738, avg=17176.82, stdev=15762.75 00:28:25.496 clat percentiles (usec): 00:28:25.496 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 8029], 00:28:25.496 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[11469], 60.00th=[12780], 00:28:25.496 | 70.00th=[13960], 80.00th=[15926], 90.00th=[52167], 95.00th=[54264], 00:28:25.496 | 99.00th=[57410], 99.50th=[58459], 99.90th=[94897], 99.95th=[94897], 00:28:25.496 | 99.99th=[94897] 00:28:25.496 bw ( KiB/s): min=15872, max=32256, per=27.05%, avg=21418.67, stdev=5604.30, samples=9 00:28:25.496 iops : min= 124, max= 252, avg=167.33, stdev=43.78, samples=9 00:28:25.496 lat (msec) : 10=38.37%, 20=46.62%, 50=0.92%, 100=14.09% 00:28:25.496 cpu : usr=93.04%, sys=6.48%, ctx=18, majf=0, minf=135 00:28:25.496 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 issued rwts: total=873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.496 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=1275763: Mon Jul 15 16:09:51 2024 00:28:25.496 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(141MiB/5026msec) 00:28:25.496 slat (nsec): min=4910, max=37389, avg=14148.76, stdev=3360.01 00:28:25.496 clat (usec): min=5297, max=91195, avg=13344.63, stdev=13088.34 00:28:25.496 lat (usec): min=5310, max=91214, avg=13358.78, stdev=13088.46 00:28:25.496 clat percentiles (usec): 00:28:25.496 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6587], 00:28:25.496 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10159], 00:28:25.496 | 70.00th=[11338], 80.00th=[12387], 90.00th=[21103], 95.00th=[50594], 00:28:25.496 | 99.00th=[52691], 99.50th=[53216], 99.90th=[90702], 99.95th=[90702], 00:28:25.496 | 99.99th=[90702] 00:28:25.496 bw ( KiB/s): min=23808, max=33792, per=36.38%, avg=28805.50, stdev=3659.01, samples=10 00:28:25.496 iops : min= 186, max= 264, avg=225.00, stdev=28.60, samples=10 00:28:25.496 lat (msec) : 10=58.24%, 20=31.65%, 50=3.90%, 100=6.21% 00:28:25.496 cpu : usr=91.70%, sys=7.76%, ctx=11, majf=0, minf=92 00:28:25.496 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.496 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=1275764: Mon Jul 15 16:09:51 2024 00:28:25.496 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(139MiB/5005msec) 00:28:25.496 slat (nsec): min=4215, max=31120, avg=12720.88, stdev=2534.32 00:28:25.496 clat (usec): min=4887, max=54997, avg=13527.04, stdev=13197.79 00:28:25.496 lat (usec): min=4900, max=55012, avg=13539.76, stdev=13197.90 00:28:25.496 clat percentiles (usec): 
00:28:25.496 | 1.00th=[ 5800], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 7177], 00:28:25.496 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[10028], 00:28:25.496 | 70.00th=[10945], 80.00th=[11994], 90.00th=[48497], 95.00th=[51119], 00:28:25.496 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:28:25.496 | 99.99th=[54789] 00:28:25.496 bw ( KiB/s): min=21504, max=35328, per=35.76%, avg=28313.60, stdev=5252.26, samples=10 00:28:25.496 iops : min= 168, max= 276, avg=221.20, stdev=41.03, samples=10 00:28:25.496 lat (msec) : 10=59.48%, 20=29.69%, 50=4.15%, 100=6.68% 00:28:25.496 cpu : usr=92.83%, sys=6.67%, ctx=34, majf=0, minf=159 00:28:25.496 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.496 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.496 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.496 00:28:25.496 Run status group 0 (all jobs): 00:28:25.496 READ: bw=77.3MiB/s (81.1MB/s), 21.8MiB/s-28.1MiB/s (22.9MB/s-29.4MB/s), io=389MiB (408MB), run=5001-5026msec 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
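The next permutation tightens the screws: DIF type 2 bdevs, 4 KiB blocks, 8 jobs at queue depth 16, and files=2, i.e. three subsystems (cnode0 through cnode2) so the job file gains filename1 and filename2 sections. The generated job would look roughly like this (a hypothetical sketch of what gen_fio_conf emits, with directives inferred from the fio banner lines; the NvmeXn1 names follow SPDK's controller-name-plus-n1 bdev convention, and the exact directives may differ):

# hypothetical shape of the generated job file for this pass
cat <<FIO
[global]
ioengine=spdk_bdev
thread=1        # the spdk_bdev engine requires fio's thread mode
bs=4k
numjobs=8
iodepth=16
rw=randread     # as in the earlier passes of this test
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
FIO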
00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 bdev_null0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 [2024-07-15 16:09:51.715224] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 bdev_null1 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:28:25.496 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.497 bdev_null2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:25.497 { 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme$subsystem", 00:28:25.497 "trtype": "$TEST_TRANSPORT", 00:28:25.497 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "$NVMF_PORT", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.497 "hdgst": ${hdgst:-false}, 00:28:25.497 "ddgst": ${ddgst:-false} 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 } 00:28:25.497 EOF 00:28:25.497 )") 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:25.497 { 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme$subsystem", 00:28:25.497 "trtype": "$TEST_TRANSPORT", 00:28:25.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "$NVMF_PORT", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.497 "hdgst": ${hdgst:-false}, 00:28:25.497 "ddgst": ${ddgst:-false} 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 } 00:28:25.497 EOF 00:28:25.497 )") 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:25.497 { 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme$subsystem", 00:28:25.497 "trtype": "$TEST_TRANSPORT", 00:28:25.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "$NVMF_PORT", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.497 "hdgst": ${hdgst:-false}, 00:28:25.497 "ddgst": ${ddgst:-false} 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 } 00:28:25.497 EOF 00:28:25.497 )") 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme0", 00:28:25.497 "trtype": "tcp", 00:28:25.497 "traddr": "10.0.0.2", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "4420", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:25.497 "hdgst": false, 00:28:25.497 "ddgst": false 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 },{ 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme1", 00:28:25.497 "trtype": "tcp", 00:28:25.497 "traddr": "10.0.0.2", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "4420", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.497 "hdgst": false, 00:28:25.497 "ddgst": false 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 },{ 00:28:25.497 "params": { 00:28:25.497 "name": "Nvme2", 00:28:25.497 "trtype": "tcp", 00:28:25.497 "traddr": "10.0.0.2", 00:28:25.497 "adrfam": "ipv4", 00:28:25.497 "trsvcid": "4420", 00:28:25.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:25.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:25.497 "hdgst": false, 00:28:25.497 "ddgst": false 00:28:25.497 }, 00:28:25.497 "method": "bdev_nvme_attach_controller" 00:28:25.497 }' 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:25.497 16:09:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.497 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:25.497 ... 00:28:25.497 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:25.497 ... 00:28:25.497 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:25.497 ... 00:28:25.497 fio-3.35 00:28:25.497 Starting 24 threads 00:28:25.497 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.720 00:28:37.720 filename0: (groupid=0, jobs=1): err= 0: pid=1276629: Mon Jul 15 16:10:03 2024 00:28:37.720 read: IOPS=79, BW=319KiB/s (326kB/s)(3192KiB/10014msec) 00:28:37.720 slat (nsec): min=7686, max=60119, avg=26366.73, stdev=9972.67 00:28:37.720 clat (msec): min=21, max=300, avg=200.53, stdev=37.00 00:28:37.720 lat (msec): min=21, max=300, avg=200.56, stdev=37.01 00:28:37.720 clat percentiles (msec): 00:28:37.720 | 1.00th=[ 22], 5.00th=[ 128], 10.00th=[ 161], 20.00th=[ 194], 00:28:37.720 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.720 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 232], 00:28:37.720 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:28:37.720 | 99.99th=[ 300] 00:28:37.720 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=313.60, stdev=60.85, samples=20 00:28:37.720 iops : min= 64, max= 96, avg=78.40, stdev=15.21, samples=20 00:28:37.720 lat (msec) : 50=1.75%, 250=94.24%, 500=4.01% 00:28:37.720 cpu : usr=97.72%, sys=1.93%, ctx=12, majf=0, minf=9 00:28:37.720 IO depths : 1=3.5%, 2=9.8%, 4=25.1%, 8=52.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:37.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.720 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.720 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.720 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.720 filename0: (groupid=0, jobs=1): err= 0: pid=1276630: Mon Jul 15 16:10:03 2024 00:28:37.720 read: IOPS=81, BW=325KiB/s (333kB/s)(3264KiB/10029msec) 00:28:37.720 slat (usec): min=4, max=373, avg=62.91, stdev=30.48 00:28:37.720 clat (msec): min=28, max=317, avg=195.94, stdev=41.07 00:28:37.720 lat (msec): min=28, max=317, avg=196.00, stdev=41.07 00:28:37.720 clat percentiles (msec): 00:28:37.720 | 1.00th=[ 29], 5.00th=[ 77], 10.00th=[ 161], 20.00th=[ 197], 00:28:37.720 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 203], 60.00th=[ 205], 00:28:37.720 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 226], 00:28:37.720 | 99.00th=[ 264], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:28:37.720 | 99.99th=[ 317] 00:28:37.720 bw ( KiB/s): min= 256, max= 512, per=4.11%, avg=320.00, stdev=77.69, samples=20 00:28:37.720 iops : min= 64, max= 128, avg=80.00, stdev=19.42, samples=20 00:28:37.720 lat (msec) : 50=1.96%, 100=3.92%, 250=92.65%, 500=1.47% 00:28:37.720 cpu : usr=94.01%, sys=3.34%, ctx=150, majf=0, minf=9 00:28:37.720 IO 
depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:37.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.720 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.720 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.720 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.720 filename0: (groupid=0, jobs=1): err= 0: pid=1276631: Mon Jul 15 16:10:03 2024 00:28:37.720 read: IOPS=78, BW=314KiB/s (321kB/s)(3136KiB/10003msec) 00:28:37.720 slat (usec): min=8, max=140, avg=36.57, stdev=12.96 00:28:37.720 clat (msec): min=120, max=231, avg=203.79, stdev=19.98 00:28:37.720 lat (msec): min=120, max=231, avg=203.83, stdev=19.99 00:28:37.720 clat percentiles (msec): 00:28:37.720 | 1.00th=[ 121], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 197], 00:28:37.720 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.720 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 224], 00:28:37.720 | 99.00th=[ 232], 99.50th=[ 232], 99.90th=[ 232], 99.95th=[ 232], 00:28:37.720 | 99.99th=[ 232] 00:28:37.720 bw ( KiB/s): min= 256, max= 384, per=3.98%, avg=309.89, stdev=63.38, samples=19 00:28:37.720 iops : min= 64, max= 96, avg=77.47, stdev=15.84, samples=19 00:28:37.720 lat (msec) : 250=100.00% 00:28:37.721 cpu : usr=96.99%, sys=1.98%, ctx=39, majf=0, minf=9 00:28:37.721 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename0: (groupid=0, jobs=1): err= 0: pid=1276632: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=79, BW=319KiB/s (327kB/s)(3200KiB/10018msec) 00:28:37.721 slat (nsec): min=6242, max=99181, avg=58876.54, stdev=12777.19 00:28:37.721 clat (msec): min=25, max=319, avg=199.90, stdev=39.61 00:28:37.721 lat (msec): min=25, max=319, avg=199.96, stdev=39.61 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 47], 5.00th=[ 126], 10.00th=[ 161], 20.00th=[ 197], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 226], 95.00th=[ 241], 00:28:37.721 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.721 | 99.99th=[ 321] 00:28:37.721 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=313.60, stdev=62.38, samples=20 00:28:37.721 iops : min= 64, max= 96, avg=78.40, stdev=15.59, samples=20 00:28:37.721 lat (msec) : 50=2.00%, 100=2.25%, 250=91.50%, 500=4.25% 00:28:37.721 cpu : usr=97.97%, sys=1.60%, ctx=15, majf=0, minf=9 00:28:37.721 IO depths : 1=3.0%, 2=9.0%, 4=24.0%, 8=54.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename0: (groupid=0, jobs=1): err= 0: pid=1276633: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=312KiB/s (320kB/s)(3128KiB/10015msec) 00:28:37.721 slat (nsec): min=8406, max=55599, avg=25955.10, stdev=9624.11 00:28:37.721 clat (msec): min=21, max=296, avg=204.65, stdev=36.04 
00:28:37.721 lat (msec): min=21, max=296, avg=204.68, stdev=36.04 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 22], 5.00th=[ 150], 10.00th=[ 192], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 266], 00:28:37.721 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 296], 00:28:37.721 | 99.99th=[ 296] 00:28:37.721 bw ( KiB/s): min= 256, max= 384, per=3.95%, avg=307.20, stdev=62.85, samples=20 00:28:37.721 iops : min= 64, max= 96, avg=76.80, stdev=15.71, samples=20 00:28:37.721 lat (msec) : 50=1.79%, 250=91.82%, 500=6.39% 00:28:37.721 cpu : usr=97.89%, sys=1.74%, ctx=17, majf=0, minf=9 00:28:37.721 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename0: (groupid=0, jobs=1): err= 0: pid=1276634: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=312KiB/s (320kB/s)(3136KiB/10045msec) 00:28:37.721 slat (usec): min=8, max=103, avg=35.48, stdev=23.72 00:28:37.721 clat (msec): min=19, max=319, avg=204.49, stdev=32.51 00:28:37.721 lat (msec): min=19, max=319, avg=204.53, stdev=32.51 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 61], 5.00th=[ 161], 10.00th=[ 194], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:28:37.721 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.721 | 99.99th=[ 321] 00:28:37.721 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=76.45, samples=19 00:28:37.721 iops : min= 32, max= 96, avg=75.79, stdev=19.11, samples=19 00:28:37.721 lat (msec) : 20=0.51%, 100=1.53%, 250=95.41%, 500=2.55% 00:28:37.721 cpu : usr=96.95%, sys=2.05%, ctx=81, majf=0, minf=9 00:28:37.721 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename0: (groupid=0, jobs=1): err= 0: pid=1276635: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=312KiB/s (320kB/s)(3128KiB/10012msec) 00:28:37.721 slat (usec): min=5, max=157, avg=56.63, stdev=17.50 00:28:37.721 clat (msec): min=14, max=321, avg=204.36, stdev=43.56 00:28:37.721 lat (msec): min=14, max=321, avg=204.42, stdev=43.56 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 15], 5.00th=[ 133], 10.00th=[ 192], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.721 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 271], 00:28:37.721 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.721 | 99.99th=[ 321] 00:28:37.721 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=71.05, samples=19 00:28:37.721 iops : min= 32, max= 96, avg=75.79, stdev=17.76, samples=19 00:28:37.721 lat (msec) : 20=1.79%, 100=2.05%, 250=88.24%, 500=7.93% 
00:28:37.721 cpu : usr=94.88%, sys=3.01%, ctx=80, majf=0, minf=9 00:28:37.721 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename0: (groupid=0, jobs=1): err= 0: pid=1276636: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=76, BW=307KiB/s (315kB/s)(3072KiB/10002msec) 00:28:37.721 slat (usec): min=21, max=102, avg=58.94, stdev=10.43 00:28:37.721 clat (msec): min=74, max=380, avg=207.87, stdev=31.12 00:28:37.721 lat (msec): min=74, max=380, avg=207.93, stdev=31.12 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 75], 5.00th=[ 192], 10.00th=[ 197], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 241], 00:28:37.721 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 380], 99.95th=[ 380], 00:28:37.721 | 99.99th=[ 380] 00:28:37.721 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=75.14, samples=19 00:28:37.721 iops : min= 32, max= 96, avg=75.79, stdev=18.78, samples=19 00:28:37.721 lat (msec) : 100=2.08%, 250=94.27%, 500=3.65% 00:28:37.721 cpu : usr=96.57%, sys=2.27%, ctx=80, majf=0, minf=9 00:28:37.721 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename1: (groupid=0, jobs=1): err= 0: pid=1276637: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=76, BW=307KiB/s (314kB/s)(3072KiB/10003msec) 00:28:37.721 slat (nsec): min=14812, max=95671, avg=61773.75, stdev=11145.15 00:28:37.721 clat (msec): min=153, max=283, avg=207.85, stdev=16.62 00:28:37.721 lat (msec): min=153, max=283, avg=207.91, stdev=16.62 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 161], 5.00th=[ 192], 10.00th=[ 197], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:28:37.721 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:28:37.721 | 99.99th=[ 284] 00:28:37.721 bw ( KiB/s): min= 240, max= 384, per=3.90%, avg=303.16, stdev=63.66, samples=19 00:28:37.721 iops : min= 60, max= 96, avg=75.79, stdev=15.91, samples=19 00:28:37.721 lat (msec) : 250=97.92%, 500=2.08% 00:28:37.721 cpu : usr=97.80%, sys=1.66%, ctx=30, majf=0, minf=9 00:28:37.721 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename1: (groupid=0, jobs=1): err= 0: pid=1276638: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10009msec) 00:28:37.721 slat (usec): min=14, max=149, avg=28.70, 
stdev=14.87 00:28:37.721 clat (msec): min=14, max=317, avg=204.04, stdev=44.19 00:28:37.721 lat (msec): min=14, max=317, avg=204.07, stdev=44.19 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 15], 5.00th=[ 132], 10.00th=[ 192], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.721 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 232], 95.00th=[ 271], 00:28:37.721 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:28:37.721 | 99.99th=[ 317] 00:28:37.721 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=71.05, samples=19 00:28:37.721 iops : min= 32, max= 96, avg=75.79, stdev=17.76, samples=19 00:28:37.721 lat (msec) : 20=2.04%, 100=2.04%, 250=88.27%, 500=7.65% 00:28:37.721 cpu : usr=95.40%, sys=2.82%, ctx=161, majf=0, minf=9 00:28:37.721 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename1: (groupid=0, jobs=1): err= 0: pid=1276639: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10012msec) 00:28:37.721 slat (nsec): min=5445, max=83358, avg=26714.41, stdev=9214.74 00:28:37.721 clat (msec): min=14, max=321, avg=204.11, stdev=44.38 00:28:37.721 lat (msec): min=14, max=321, avg=204.13, stdev=44.38 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 15], 5.00th=[ 133], 10.00th=[ 192], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.721 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 232], 95.00th=[ 271], 00:28:37.721 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.721 | 99.99th=[ 321] 00:28:37.721 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=302.32, stdev=71.53, samples=19 00:28:37.721 iops : min= 32, max= 96, avg=75.58, stdev=17.88, samples=19 00:28:37.721 lat (msec) : 20=2.30%, 100=1.79%, 250=88.27%, 500=7.65% 00:28:37.721 cpu : usr=97.07%, sys=1.97%, ctx=29, majf=0, minf=9 00:28:37.721 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename1: (groupid=0, jobs=1): err= 0: pid=1276640: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=313KiB/s (320kB/s)(3136KiB/10026msec) 00:28:37.721 slat (usec): min=9, max=137, avg=24.50, stdev=13.49 00:28:37.721 clat (msec): min=21, max=302, avg=204.18, stdev=34.58 00:28:37.721 lat (msec): min=21, max=302, avg=204.21, stdev=34.58 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 22], 5.00th=[ 150], 10.00th=[ 194], 20.00th=[ 199], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:28:37.721 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 305], 00:28:37.721 | 99.99th=[ 305] 00:28:37.721 bw ( KiB/s): min= 256, max= 384, per=3.95%, avg=307.20, stdev=62.85, samples=20 00:28:37.721 iops : min= 64, max= 96, 
avg=76.80, stdev=15.71, samples=20 00:28:37.721 lat (msec) : 50=2.04%, 250=93.62%, 500=4.34% 00:28:37.721 cpu : usr=96.18%, sys=2.61%, ctx=21, majf=0, minf=9 00:28:37.721 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:37.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.721 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.721 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.721 filename1: (groupid=0, jobs=1): err= 0: pid=1276641: Mon Jul 15 16:10:03 2024 00:28:37.721 read: IOPS=78, BW=314KiB/s (321kB/s)(3136KiB/10002msec) 00:28:37.721 slat (nsec): min=8987, max=62577, avg=27872.99, stdev=10601.15 00:28:37.721 clat (msec): min=118, max=231, avg=203.87, stdev=20.14 00:28:37.721 lat (msec): min=118, max=231, avg=203.89, stdev=20.14 00:28:37.721 clat percentiles (msec): 00:28:37.721 | 1.00th=[ 120], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 197], 00:28:37.721 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.721 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 224], 00:28:37.721 | 99.00th=[ 232], 99.50th=[ 232], 99.90th=[ 232], 99.95th=[ 232], 00:28:37.721 | 99.99th=[ 232] 00:28:37.721 bw ( KiB/s): min= 256, max= 384, per=3.98%, avg=309.89, stdev=64.93, samples=19 00:28:37.721 iops : min= 64, max= 96, avg=77.47, stdev=16.23, samples=19 00:28:37.721 lat (msec) : 250=100.00% 00:28:37.721 cpu : usr=96.31%, sys=2.51%, ctx=37, majf=0, minf=9 00:28:37.722 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename1: (groupid=0, jobs=1): err= 0: pid=1276642: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=314KiB/s (321kB/s)(3136KiB/10003msec) 00:28:37.722 slat (nsec): min=8656, max=72186, avg=35812.58, stdev=12180.37 00:28:37.722 clat (msec): min=131, max=245, avg=203.79, stdev=18.83 00:28:37.722 lat (msec): min=131, max=245, avg=203.83, stdev=18.83 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 132], 5.00th=[ 155], 10.00th=[ 192], 20.00th=[ 194], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.722 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 224], 00:28:37.722 | 99.00th=[ 232], 99.50th=[ 232], 99.90th=[ 247], 99.95th=[ 247], 00:28:37.722 | 99.99th=[ 247] 00:28:37.722 bw ( KiB/s): min= 256, max= 384, per=3.98%, avg=309.89, stdev=64.93, samples=19 00:28:37.722 iops : min= 64, max= 96, avg=77.47, stdev=16.23, samples=19 00:28:37.722 lat (msec) : 250=100.00% 00:28:37.722 cpu : usr=97.91%, sys=1.67%, ctx=18, majf=0, minf=9 00:28:37.722 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename1: (groupid=0, jobs=1): err= 0: pid=1276643: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=313KiB/s 
(321kB/s)(3136KiB/10013msec) 00:28:37.722 slat (usec): min=8, max=105, avg=59.54, stdev=11.65 00:28:37.722 clat (msec): min=14, max=322, avg=203.84, stdev=38.95 00:28:37.722 lat (msec): min=14, max=322, avg=203.90, stdev=38.96 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 15], 5.00th=[ 192], 10.00th=[ 197], 20.00th=[ 199], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.722 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:28:37.722 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.722 | 99.99th=[ 321] 00:28:37.722 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=72.44, samples=19 00:28:37.722 iops : min= 32, max= 96, avg=75.79, stdev=18.11, samples=19 00:28:37.722 lat (msec) : 20=2.04%, 100=2.04%, 250=93.62%, 500=2.30% 00:28:37.722 cpu : usr=97.85%, sys=1.73%, ctx=22, majf=0, minf=9 00:28:37.722 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename1: (groupid=0, jobs=1): err= 0: pid=1276644: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=113, BW=454KiB/s (464kB/s)(4544KiB/10019msec) 00:28:37.722 slat (nsec): min=5454, max=26987, avg=10187.92, stdev=3200.93 00:28:37.722 clat (msec): min=46, max=256, avg=140.99, stdev=29.71 00:28:37.722 lat (msec): min=46, max=256, avg=141.00, stdev=29.71 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 47], 5.00th=[ 107], 10.00th=[ 120], 20.00th=[ 129], 00:28:37.722 | 30.00th=[ 132], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 138], 00:28:37.722 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 186], 95.00th=[ 201], 00:28:37.722 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 257], 99.95th=[ 257], 00:28:37.722 | 99.99th=[ 257] 00:28:37.722 bw ( KiB/s): min= 352, max= 513, per=5.77%, avg=448.05, stdev=54.75, samples=20 00:28:37.722 iops : min= 88, max= 128, avg=112.00, stdev=13.67, samples=20 00:28:37.722 lat (msec) : 50=1.41%, 100=1.94%, 250=96.48%, 500=0.18% 00:28:37.722 cpu : usr=97.20%, sys=2.13%, ctx=16, majf=0, minf=9 00:28:37.722 IO depths : 1=0.4%, 2=1.0%, 4=7.6%, 8=78.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=89.1%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276645: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=109, BW=439KiB/s (449kB/s)(4400KiB/10026msec) 00:28:37.722 slat (nsec): min=5160, max=59201, avg=12712.02, stdev=6880.83 00:28:37.722 clat (msec): min=41, max=269, avg=145.61, stdev=32.79 00:28:37.722 lat (msec): min=41, max=269, avg=145.63, stdev=32.79 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 46], 5.00th=[ 108], 10.00th=[ 121], 20.00th=[ 125], 00:28:37.722 | 30.00th=[ 131], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 150], 00:28:37.722 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 197], 95.00th=[ 199], 00:28:37.722 | 99.00th=[ 220], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 271], 00:28:37.722 | 99.99th=[ 271] 00:28:37.722 bw ( KiB/s): min= 368, 
max= 513, per=5.57%, avg=433.65, stdev=56.68, samples=20 00:28:37.722 iops : min= 92, max= 128, avg=108.40, stdev=14.15, samples=20 00:28:37.722 lat (msec) : 50=1.45%, 100=2.91%, 250=95.45%, 500=0.18% 00:28:37.722 cpu : usr=97.74%, sys=1.91%, ctx=17, majf=0, minf=9 00:28:37.722 IO depths : 1=1.7%, 2=7.0%, 4=22.0%, 8=58.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276646: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10011msec) 00:28:37.722 slat (usec): min=13, max=149, avg=61.59, stdev=14.17 00:28:37.722 clat (msec): min=14, max=320, avg=203.78, stdev=41.40 00:28:37.722 lat (msec): min=14, max=320, avg=203.84, stdev=41.41 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 15], 5.00th=[ 140], 10.00th=[ 194], 20.00th=[ 199], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.722 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 241], 00:28:37.722 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:28:37.722 | 99.99th=[ 321] 00:28:37.722 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=303.16, stdev=75.14, samples=19 00:28:37.722 iops : min= 32, max= 96, avg=75.79, stdev=18.78, samples=19 00:28:37.722 lat (msec) : 20=2.04%, 100=2.04%, 250=91.07%, 500=4.85% 00:28:37.722 cpu : usr=96.82%, sys=2.12%, ctx=53, majf=0, minf=9 00:28:37.722 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276647: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=79, BW=320KiB/s (328kB/s)(3200KiB/10003msec) 00:28:37.722 slat (usec): min=4, max=305, avg=46.09, stdev=32.43 00:28:37.722 clat (msec): min=125, max=272, avg=199.68, stdev=23.73 00:28:37.722 lat (msec): min=125, max=272, avg=199.72, stdev=23.73 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 133], 5.00th=[ 150], 10.00th=[ 155], 20.00th=[ 194], 00:28:37.722 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 203], 60.00th=[ 205], 00:28:37.722 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 224], 00:28:37.722 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:28:37.722 | 99.99th=[ 271] 00:28:37.722 bw ( KiB/s): min= 256, max= 384, per=4.07%, avg=316.63, stdev=62.56, samples=19 00:28:37.722 iops : min= 64, max= 96, avg=79.16, stdev=15.64, samples=19 00:28:37.722 lat (msec) : 250=97.75%, 500=2.25% 00:28:37.722 cpu : usr=94.90%, sys=2.86%, ctx=63, majf=0, minf=9 00:28:37.722 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 
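Each of these per-job blocks is internally consistent: at the 4k block size used in this run, average bandwidth is just IOPS times 4 KiB (for the job directly above, 79.16 iops x 4 KiB = 316.6 KiB/s, matching the reported avg=316.63), and the per-job averages should add up to something near the aggregate READ bandwidth in the run-status line further down. A quick check one could run over a saved copy of this run — "fio.log" is a hypothetical filename holding just this 24-thread run:

# sum the per-job average bandwidths (KiB/s); the total should land near the
# aggregate bw reported in the "Run status group 0" line at the end of the run
grep 'bw (' fio.log | sed 's/.*avg=\([0-9.]*\).*/\1/' | awk '{s+=$1} END {printf "%.1f KiB/s\n", s}'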
filename2: (groupid=0, jobs=1): err= 0: pid=1276648: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10009msec) 00:28:37.722 slat (nsec): min=20366, max=90994, avg=59963.44, stdev=10453.02 00:28:37.722 clat (msec): min=11, max=468, avg=203.71, stdev=47.65 00:28:37.722 lat (msec): min=11, max=468, avg=203.77, stdev=47.65 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 12], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 197], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.722 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 224], 00:28:37.722 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 468], 99.95th=[ 468], 00:28:37.722 | 99.99th=[ 468] 00:28:37.722 bw ( KiB/s): min= 128, max= 384, per=3.81%, avg=296.42, stdev=74.55, samples=19 00:28:37.722 iops : min= 32, max= 96, avg=74.11, stdev=18.64, samples=19 00:28:37.722 lat (msec) : 20=2.04%, 50=2.04%, 250=93.88%, 500=2.04% 00:28:37.722 cpu : usr=97.75%, sys=1.73%, ctx=34, majf=0, minf=9 00:28:37.722 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276649: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=312KiB/s (320kB/s)(3128KiB/10015msec) 00:28:37.722 slat (nsec): min=8726, max=93640, avg=52389.53, stdev=16646.58 00:28:37.722 clat (msec): min=21, max=299, avg=204.43, stdev=37.28 00:28:37.722 lat (msec): min=21, max=299, avg=204.48, stdev=37.28 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 22], 5.00th=[ 130], 10.00th=[ 192], 20.00th=[ 194], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.722 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 275], 00:28:37.722 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:28:37.722 | 99.99th=[ 300] 00:28:37.722 bw ( KiB/s): min= 256, max= 384, per=3.95%, avg=307.20, stdev=59.78, samples=20 00:28:37.722 iops : min= 64, max= 96, avg=76.80, stdev=14.94, samples=20 00:28:37.722 lat (msec) : 50=1.79%, 250=92.33%, 500=5.88% 00:28:37.722 cpu : usr=96.44%, sys=2.28%, ctx=152, majf=0, minf=9 00:28:37.722 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276650: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10008msec) 00:28:37.722 slat (nsec): min=13407, max=95519, avg=58653.84, stdev=13469.33 00:28:37.722 clat (msec): min=14, max=317, avg=203.72, stdev=40.66 00:28:37.722 lat (msec): min=14, max=317, avg=203.77, stdev=40.66 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 15], 5.00th=[ 169], 10.00th=[ 197], 20.00th=[ 199], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.722 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 241], 00:28:37.722 | 99.00th=[ 317], 
99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:28:37.722 | 99.99th=[ 317] 00:28:37.722 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=302.32, stdev=75.59, samples=19 00:28:37.722 iops : min= 32, max= 96, avg=75.58, stdev=18.90, samples=19 00:28:37.722 lat (msec) : 20=2.30%, 100=1.79%, 250=92.09%, 500=3.83% 00:28:37.722 cpu : usr=97.81%, sys=1.68%, ctx=48, majf=0, minf=9 00:28:37.722 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:37.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.722 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.722 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.722 filename2: (groupid=0, jobs=1): err= 0: pid=1276651: Mon Jul 15 16:10:03 2024 00:28:37.722 read: IOPS=78, BW=313KiB/s (321kB/s)(3136KiB/10011msec) 00:28:37.722 slat (nsec): min=8205, max=87503, avg=54832.09, stdev=14445.43 00:28:37.722 clat (msec): min=12, max=376, avg=203.86, stdev=49.94 00:28:37.722 lat (msec): min=12, max=376, avg=203.91, stdev=49.94 00:28:37.722 clat percentiles (msec): 00:28:37.722 | 1.00th=[ 13], 5.00th=[ 130], 10.00th=[ 192], 20.00th=[ 197], 00:28:37.722 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 205], 60.00th=[ 209], 00:28:37.722 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 275], 00:28:37.722 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:28:37.722 | 99.99th=[ 376] 00:28:37.722 bw ( KiB/s): min= 128, max= 384, per=3.81%, avg=296.42, stdev=71.83, samples=19 00:28:37.722 iops : min= 32, max= 96, avg=74.11, stdev=17.96, samples=19 00:28:37.722 lat (msec) : 20=2.04%, 50=2.04%, 250=90.82%, 500=5.10% 00:28:37.722 cpu : usr=98.07%, sys=1.50%, ctx=24, majf=0, minf=9 00:28:37.722 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:37.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.723 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.723 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.723 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.723 filename2: (groupid=0, jobs=1): err= 0: pid=1276652: Mon Jul 15 16:10:03 2024 00:28:37.723 read: IOPS=76, BW=307KiB/s (314kB/s)(3072KiB/10003msec) 00:28:37.723 slat (usec): min=8, max=102, avg=62.53, stdev=12.50 00:28:37.723 clat (msec): min=67, max=317, avg=207.86, stdev=20.17 00:28:37.723 lat (msec): min=67, max=317, avg=207.93, stdev=20.17 00:28:37.723 clat percentiles (msec): 00:28:37.723 | 1.00th=[ 161], 5.00th=[ 192], 10.00th=[ 197], 20.00th=[ 199], 00:28:37.723 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:28:37.723 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:28:37.723 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:28:37.723 | 99.99th=[ 317] 00:28:37.723 bw ( KiB/s): min= 256, max= 384, per=3.90%, avg=303.16, stdev=63.44, samples=19 00:28:37.723 iops : min= 64, max= 96, avg=75.79, stdev=15.86, samples=19 00:28:37.723 lat (msec) : 100=0.26%, 250=97.14%, 500=2.60% 00:28:37.723 cpu : usr=97.70%, sys=1.75%, ctx=19, majf=0, minf=9 00:28:37.723 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:37.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.723 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.723 issued rwts: 
total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.723 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:37.723 00:28:37.723 Run status group 0 (all jobs): 00:28:37.723 READ: bw=7768KiB/s (7955kB/s), 307KiB/s-454KiB/s (314kB/s-464kB/s), io=76.2MiB (79.9MB), run=10002-10045msec 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 bdev_null0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 [2024-07-15 16:10:03.522592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 bdev_null1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.723 { 00:28:37.723 "params": { 00:28:37.723 "name": "Nvme$subsystem", 00:28:37.723 "trtype": "$TEST_TRANSPORT", 00:28:37.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.723 "adrfam": "ipv4", 00:28:37.723 "trsvcid": "$NVMF_PORT", 00:28:37.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.723 "hdgst": ${hdgst:-false}, 00:28:37.723 "ddgst": ${ddgst:-false} 00:28:37.723 }, 00:28:37.723 "method": "bdev_nvme_attach_controller" 00:28:37.723 } 00:28:37.723 EOF 00:28:37.723 )") 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.723 { 00:28:37.723 "params": { 00:28:37.723 "name": "Nvme$subsystem", 00:28:37.723 "trtype": "$TEST_TRANSPORT", 00:28:37.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.723 "adrfam": "ipv4", 00:28:37.723 "trsvcid": "$NVMF_PORT", 00:28:37.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.723 "hdgst": ${hdgst:-false}, 00:28:37.723 "ddgst": ${ddgst:-false} 00:28:37.723 }, 00:28:37.723 "method": "bdev_nvme_attach_controller" 00:28:37.723 } 00:28:37.723 EOF 00:28:37.723 )") 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
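gen_nvmf_target_json, whose trace continues below, builds one attach-controller JSON fragment per subsystem with a here-document, collects them in a bash array, and joins them with commas for the printf that follows the jq call above. A minimal sketch of the pattern (the fragment body is abbreviated here; the full parameter set is visible in the heredocs above):

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
(IFS=,; printf '%s\n' "${config[*]}")   # "${config[*]}" joins the fragments with the first IFS character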
00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:37.723 16:10:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:37.723 "params": { 00:28:37.723 "name": "Nvme0", 00:28:37.723 "trtype": "tcp", 00:28:37.723 "traddr": "10.0.0.2", 00:28:37.723 "adrfam": "ipv4", 00:28:37.723 "trsvcid": "4420", 00:28:37.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.723 "hdgst": false, 00:28:37.723 "ddgst": false 00:28:37.723 }, 00:28:37.723 "method": "bdev_nvme_attach_controller" 00:28:37.723 },{ 00:28:37.724 "params": { 00:28:37.724 "name": "Nvme1", 00:28:37.724 "trtype": "tcp", 00:28:37.724 "traddr": "10.0.0.2", 00:28:37.724 "adrfam": "ipv4", 00:28:37.724 "trsvcid": "4420", 00:28:37.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.724 "hdgst": false, 00:28:37.724 "ddgst": false 00:28:37.724 }, 00:28:37.724 "method": "bdev_nvme_attach_controller" 00:28:37.724 }' 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.724 16:10:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.724 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:37.724 ... 00:28:37.724 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:37.724 ... 
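The comma-joined controller list printed above is what fio receives through --spdk_json_conf /dev/fd/62. A sketch of an equivalent standalone invocation, assuming the plugin was built at build/fio/spdk_bdev, the config has been saved to a hypothetical bdev.json, and the attached controllers expose namespaces under SPDK's usual Nvme0n1/Nvme1n1 naming; the job parameters mirror the bs/numjobs/iodepth/runtime values set at the top of this test:

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=8k,16k,128k --numjobs=2 --iodepth=8 --runtime=5

fio treats the three comma-separated block sizes as separate read, write, and trim sizes, which is exactly how the job lines above report them: bs=(R) 8192B, (W) 16.0KiB, (T) 128KiB.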
00:28:37.724 fio-3.35 00:28:37.724 Starting 4 threads 00:28:37.724 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.977 00:28:42.977 filename0: (groupid=0, jobs=1): err= 0: pid=1278031: Mon Jul 15 16:10:09 2024 00:28:42.977 read: IOPS=1791, BW=14.0MiB/s (14.7MB/s)(70.0MiB/5002msec) 00:28:42.977 slat (nsec): min=4344, max=61177, avg=13684.16, stdev=6223.31 00:28:42.977 clat (usec): min=1847, max=46524, avg=4425.15, stdev=1499.14 00:28:42.977 lat (usec): min=1867, max=46538, avg=4438.83, stdev=1498.68 00:28:42.977 clat percentiles (usec): 00:28:42.977 | 1.00th=[ 3064], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3818], 00:28:42.977 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:28:42.977 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5932], 95.00th=[ 6194], 00:28:42.977 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[46400], 00:28:42.977 | 99.99th=[46400] 00:28:42.977 bw ( KiB/s): min=13755, max=14960, per=24.57%, avg=14324.30, stdev=307.96, samples=10 00:28:42.977 iops : min= 1719, max= 1870, avg=1790.50, stdev=38.57, samples=10 00:28:42.977 lat (msec) : 2=0.03%, 4=29.55%, 10=70.33%, 50=0.09% 00:28:42.977 cpu : usr=95.48%, sys=4.10%, ctx=12, majf=0, minf=0 00:28:42.977 IO depths : 1=0.1%, 2=1.6%, 4=71.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 issued rwts: total=8959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.977 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:42.977 filename0: (groupid=0, jobs=1): err= 0: pid=1278032: Mon Jul 15 16:10:09 2024 00:28:42.977 read: IOPS=1798, BW=14.0MiB/s (14.7MB/s)(70.3MiB/5002msec) 00:28:42.977 slat (nsec): min=4295, max=61907, avg=13011.47, stdev=5449.65 00:28:42.977 clat (usec): min=1433, max=7373, avg=4409.24, stdev=730.12 00:28:42.977 lat (usec): min=1446, max=7381, avg=4422.25, stdev=729.43 00:28:42.977 clat percentiles (usec): 00:28:42.977 | 1.00th=[ 3064], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 3949], 00:28:42.977 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:28:42.977 | 70.00th=[ 4424], 80.00th=[ 4752], 90.00th=[ 5735], 95.00th=[ 6063], 00:28:42.977 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7242], 00:28:42.977 | 99.99th=[ 7373] 00:28:42.977 bw ( KiB/s): min=13616, max=15200, per=24.70%, avg=14401.78, stdev=449.32, samples=9 00:28:42.977 iops : min= 1702, max= 1900, avg=1800.22, stdev=56.16, samples=9 00:28:42.977 lat (msec) : 2=0.02%, 4=26.43%, 10=73.54% 00:28:42.977 cpu : usr=95.34%, sys=4.18%, ctx=8, majf=0, minf=9 00:28:42.977 IO depths : 1=0.2%, 2=4.5%, 4=65.6%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 issued rwts: total=8996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.977 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:42.977 filename1: (groupid=0, jobs=1): err= 0: pid=1278033: Mon Jul 15 16:10:09 2024 00:28:42.977 read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5004msec) 00:28:42.977 slat (nsec): min=3725, max=75743, avg=14130.91, stdev=5365.60 00:28:42.977 clat (usec): min=830, max=8002, avg=4222.20, stdev=717.77 00:28:42.977 lat (usec): min=862, max=8015, avg=4236.33, stdev=717.68 00:28:42.977 clat percentiles (usec): 00:28:42.977 | 1.00th=[ 2769], 
5.00th=[ 3261], 10.00th=[ 3589], 20.00th=[ 3785], 00:28:42.977 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:28:42.977 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 5145], 95.00th=[ 5932], 00:28:42.977 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 7832], 00:28:42.977 | 99.99th=[ 8029] 00:28:42.977 bw ( KiB/s): min=14288, max=16096, per=25.73%, avg=15003.20, stdev=495.29, samples=10 00:28:42.977 iops : min= 1786, max= 2012, avg=1875.40, stdev=61.91, samples=10 00:28:42.977 lat (usec) : 1000=0.01% 00:28:42.977 lat (msec) : 2=0.03%, 4=37.65%, 10=62.31% 00:28:42.977 cpu : usr=94.22%, sys=4.96%, ctx=13, majf=0, minf=0 00:28:42.977 IO depths : 1=0.1%, 2=3.4%, 4=68.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 issued rwts: total=9382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.977 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:42.977 filename1: (groupid=0, jobs=1): err= 0: pid=1278034: Mon Jul 15 16:10:09 2024 00:28:42.977 read: IOPS=1825, BW=14.3MiB/s (15.0MB/s)(71.3MiB/5002msec) 00:28:42.977 slat (nsec): min=4256, max=48554, avg=12137.22, stdev=4660.67 00:28:42.977 clat (usec): min=1677, max=7767, avg=4346.67, stdev=737.24 00:28:42.977 lat (usec): min=1685, max=7780, avg=4358.80, stdev=736.70 00:28:42.977 clat percentiles (usec): 00:28:42.977 | 1.00th=[ 2966], 5.00th=[ 3556], 10.00th=[ 3687], 20.00th=[ 3916], 00:28:42.977 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:28:42.977 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5604], 95.00th=[ 6063], 00:28:42.977 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7701], 00:28:42.977 | 99.99th=[ 7767] 00:28:42.977 bw ( KiB/s): min=14352, max=15296, per=25.04%, avg=14601.60, stdev=316.44, samples=10 00:28:42.977 iops : min= 1794, max= 1912, avg=1825.20, stdev=39.56, samples=10 00:28:42.977 lat (msec) : 2=0.08%, 4=34.72%, 10=65.21% 00:28:42.977 cpu : usr=94.92%, sys=4.58%, ctx=14, majf=0, minf=9 00:28:42.977 IO depths : 1=0.1%, 2=2.3%, 4=68.4%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.977 issued rwts: total=9131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.977 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:42.977 00:28:42.977 Run status group 0 (all jobs): 00:28:42.977 READ: bw=56.9MiB/s (59.7MB/s), 14.0MiB/s-14.6MiB/s (14.7MB/s-15.4MB/s), io=285MiB (299MB), run=5002-5004msec 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:42.977 16:10:09 
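The READ line in the run summary above is the four per-file jobs added together, and the numbers are self-consistent: 14.0 + 14.0 + 14.6 + 14.3 MiB/s ≈ 56.9 MiB/s, and the per-job "per=" shares (24.57% + 24.70% + 25.73% + 25.04%) sum to ~100%. Also worth noting in the first job: a 99.95th clat percentile of ~46.4 ms against a ~4.4 ms mean, i.e. the 0.09% of I/Os in the 50 ms latency bucket stretch the tail without visibly moving the average.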
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:42.977 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 00:28:42.978 real 0m24.255s 00:28:42.978 user 4m30.438s 00:28:42.978 sys 0m7.994s 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 ************************************ 00:28:42.978 END TEST fio_dif_rand_params 00:28:42.978 ************************************ 00:28:42.978 16:10:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:42.978 16:10:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:42.978 16:10:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:42.978 16:10:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 ************************************ 00:28:42.978 START TEST fio_dif_digest 00:28:42.978 ************************************ 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 bdev_null0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.978 [2024-07-15 16:10:09.826715] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.978 { 00:28:42.978 "params": { 00:28:42.978 "name": "Nvme$subsystem", 00:28:42.978 "trtype": "$TEST_TRANSPORT", 00:28:42.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.978 "adrfam": "ipv4", 00:28:42.978 "trsvcid": "$NVMF_PORT", 00:28:42.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.978 "hdgst": ${hdgst:-false}, 00:28:42.978 "ddgst": ${ddgst:-false} 00:28:42.978 }, 00:28:42.978 "method": "bdev_nvme_attach_controller" 
00:28:42.978 } 00:28:42.978 EOF 00:28:42.978 )") 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:42.978 "params": { 00:28:42.978 "name": "Nvme0", 00:28:42.978 "trtype": "tcp", 00:28:42.978 "traddr": "10.0.0.2", 00:28:42.978 "adrfam": "ipv4", 00:28:42.978 "trsvcid": "4420", 00:28:42.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:42.978 "hdgst": true, 00:28:42.978 "ddgst": true 00:28:42.978 }, 00:28:42.978 "method": "bdev_nvme_attach_controller" 00:28:42.978 }' 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:42.978 16:10:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.259 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:43.259 ... 
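The only change from the random-params config printed earlier is "hdgst": true and "ddgst": true, which enables NVMe/TCP header and data digests (CRC32C) on the initiator side. A sketch of the same attach done directly through rpc.py, assuming its --hdgst/--ddgst flags (they mirror the JSON keys above; treat the exact flag names as an assumption):

# hypothetical direct attach with both digests on; values taken from the JSON printed above
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --hdgst --ddgst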
00:28:43.259 fio-3.35 00:28:43.259 Starting 3 threads 00:28:43.259 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.480 00:28:55.480 filename0: (groupid=0, jobs=1): err= 0: pid=1278925: Mon Jul 15 16:10:20 2024 00:28:55.480 read: IOPS=184, BW=23.0MiB/s (24.2MB/s)(232MiB/10045msec) 00:28:55.480 slat (nsec): min=4850, max=47281, avg=14812.84, stdev=3874.08 00:28:55.480 clat (usec): min=12081, max=57390, avg=16231.10, stdev=2393.34 00:28:55.480 lat (usec): min=12094, max=57404, avg=16245.91, stdev=2393.24 00:28:55.480 clat percentiles (usec): 00:28:55.480 | 1.00th=[13304], 5.00th=[14091], 10.00th=[14484], 20.00th=[15139], 00:28:55.480 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:28:55.480 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:28:55.480 | 99.00th=[19530], 99.50th=[20317], 99.90th=[56886], 99.95th=[57410], 00:28:55.480 | 99.99th=[57410] 00:28:55.480 bw ( KiB/s): min=21248, max=24576, per=32.02%, avg=23682.25, stdev=747.05, samples=20 00:28:55.480 iops : min= 166, max= 192, avg=185.00, stdev= 5.86, samples=20 00:28:55.480 lat (msec) : 20=99.41%, 50=0.38%, 100=0.22% 00:28:55.480 cpu : usr=90.33%, sys=9.17%, ctx=25, majf=0, minf=145 00:28:55.480 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.480 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.480 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.480 filename0: (groupid=0, jobs=1): err= 0: pid=1278926: Mon Jul 15 16:10:20 2024 00:28:55.480 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(244MiB/10046msec) 00:28:55.480 slat (nsec): min=7601, max=45744, avg=14213.63, stdev=3766.43 00:28:55.480 clat (usec): min=9963, max=50081, avg=15430.02, stdev=1605.25 00:28:55.480 lat (usec): min=9975, max=50094, avg=15444.24, stdev=1605.20 00:28:55.480 clat percentiles (usec): 00:28:55.480 | 1.00th=[12518], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:28:55.481 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:28:55.481 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:28:55.481 | 99.00th=[18220], 99.50th=[18744], 99.90th=[46924], 99.95th=[50070], 00:28:55.481 | 99.99th=[50070] 00:28:55.481 bw ( KiB/s): min=24064, max=25856, per=33.68%, avg=24908.80, stdev=463.19, samples=20 00:28:55.481 iops : min= 188, max= 202, avg=194.60, stdev= 3.62, samples=20 00:28:55.481 lat (msec) : 10=0.05%, 20=99.79%, 50=0.10%, 100=0.05% 00:28:55.481 cpu : usr=89.91%, sys=9.59%, ctx=30, majf=0, minf=113 00:28:55.481 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.481 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.481 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.481 filename0: (groupid=0, jobs=1): err= 0: pid=1278927: Mon Jul 15 16:10:20 2024 00:28:55.481 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(251MiB/10046msec) 00:28:55.481 slat (nsec): min=7106, max=89781, avg=14748.40, stdev=4907.87 00:28:55.481 clat (usec): min=9666, max=47662, avg=14977.10, stdev=1364.92 00:28:55.481 lat (usec): min=9680, max=47675, avg=14991.85, stdev=1364.72 00:28:55.481 clat percentiles (usec): 00:28:55.481 | 
1.00th=[11863], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:28:55.481 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:28:55.481 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:28:55.481 | 99.00th=[17433], 99.50th=[17957], 99.90th=[20841], 99.95th=[20841], 00:28:55.481 | 99.99th=[47449] 00:28:55.481 bw ( KiB/s): min=24832, max=26880, per=34.67%, avg=25638.40, stdev=565.00, samples=20 00:28:55.481 iops : min= 194, max= 210, avg=200.30, stdev= 4.41, samples=20 00:28:55.481 lat (msec) : 10=0.05%, 20=99.75%, 50=0.20% 00:28:55.481 cpu : usr=89.75%, sys=9.75%, ctx=49, majf=0, minf=130 00:28:55.481 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.481 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.481 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.481 00:28:55.481 Run status group 0 (all jobs): 00:28:55.481 READ: bw=72.2MiB/s (75.7MB/s), 23.0MiB/s-24.9MiB/s (24.2MB/s-26.1MB/s), io=726MiB (761MB), run=10045-10046msec 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.481 00:28:55.481 real 0m11.084s 00:28:55.481 user 0m28.110s 00:28:55.481 sys 0m3.128s 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:55.481 16:10:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.481 ************************************ 00:28:55.481 END TEST fio_dif_digest 00:28:55.481 ************************************ 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:55.481 16:10:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:55.481 16:10:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:28:55.481 rmmod nvme_tcp 00:28:55.481 rmmod nvme_fabrics 00:28:55.481 rmmod nvme_keyring 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1272724 ']' 00:28:55.481 16:10:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1272724 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1272724 ']' 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1272724 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.481 16:10:20 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1272724 00:28:55.481 16:10:21 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:55.481 16:10:21 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:55.481 16:10:21 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1272724' 00:28:55.481 killing process with pid 1272724 00:28:55.481 16:10:21 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1272724 00:28:55.481 16:10:21 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1272724 00:28:55.481 16:10:21 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:55.481 16:10:21 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:55.481 Waiting for block devices as requested 00:28:55.481 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:55.740 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:55.740 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:55.999 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:55.999 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:55.999 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:55.999 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:56.256 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:56.256 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:56.256 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:56.256 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:56.513 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:56.513 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:56.513 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:56.513 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:56.770 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:56.770 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:56.770 16:10:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:56.770 16:10:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:56.770 16:10:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.770 16:10:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:56.770 16:10:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.770 16:10:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:56.770 16:10:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.300 16:10:25 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.300 00:28:59.300 real 1m6.744s 00:28:59.300 user 6m26.826s 00:28:59.300 sys 0m19.762s 00:28:59.300 16:10:25 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:28:59.300 16:10:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:59.300 ************************************ 00:28:59.300 END TEST nvmf_dif 00:28:59.300 ************************************ 00:28:59.300 16:10:25 -- common/autotest_common.sh@1142 -- # return 0 00:28:59.300 16:10:25 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:59.300 16:10:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:59.300 16:10:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.300 16:10:25 -- common/autotest_common.sh@10 -- # set +x 00:28:59.300 ************************************ 00:28:59.300 START TEST nvmf_abort_qd_sizes 00:28:59.300 ************************************ 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:59.300 * Looking for test storage... 00:28:59.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.300 16:10:25 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.300 16:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:01.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:01.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:01.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.200 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:01.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
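The discovery loop above pairs each supported NIC PCI function with its kernel net device by listing the function's sysfs net/ directory, which is where the "Found net devices under ..." lines come from. A minimal sketch of that lookup for one of the functions found above:

pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"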
00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:01.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:29:01.201 00:29:01.201 --- 10.0.0.2 ping statistics --- 00:29:01.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.201 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:01.201 00:29:01.201 --- 10.0.0.1 ping statistics --- 00:29:01.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.201 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:01.201 16:10:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:02.137 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:02.137 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:02.396 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:03.328 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1283715 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1283715 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1283715 ']' 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
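nvmf_tcp_init, traced above, splits the two ports of the ice NIC across network namespaces so that the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, root namespace) exchange real NVMe/TCP traffic on one host; the pings in both directions verify the path before any SPDK process starts. Condensed, the setup reduces to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target namespace

Every nvmf_tgt launch from here on is wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP above is prefixed with NVMF_TARGET_NS_CMD.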
00:29:03.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:03.328 16:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.328 [2024-07-15 16:10:30.250297] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:29:03.328 [2024-07-15 16:10:30.250369] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.586 [2024-07-15 16:10:30.317267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.586 [2024-07-15 16:10:30.435365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.586 [2024-07-15 16:10:30.435418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.586 [2024-07-15 16:10:30.435441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.586 [2024-07-15 16:10:30.435451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.586 [2024-07-15 16:10:30.435461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.586 [2024-07-15 16:10:30.435551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.586 [2024-07-15 16:10:30.435590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.586 [2024-07-15 16:10:30.435644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.586 [2024-07-15 16:10:30.435647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:29:04.517 16:10:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.517 16:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.517 ************************************ 00:29:04.517 START TEST spdk_target_abort 00:29:04.517 ************************************ 00:29:04.517 16:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:04.517 16:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:04.517 16:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:29:04.517 16:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.517 16:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 spdk_targetn1 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 [2024-07-15 16:10:34.081480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 [2024-07-15 16:10:34.113754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.846 16:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.846 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:10.439 Initializing NVMe Controllers 00:29:10.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:10.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:10.439 Initialization complete. Launching workers. 00:29:10.439 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11085, failed: 0 00:29:10.439 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1357, failed to submit 9728 00:29:10.439 success 780, unsuccess 577, failed 0 00:29:10.439 16:10:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.439 16:10:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.439 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.718 Initializing NVMe Controllers 00:29:13.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.718 Initialization complete. Launching workers. 00:29:13.718 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8703, failed: 0 00:29:13.718 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7456 00:29:13.718 success 320, unsuccess 927, failed 0 00:29:13.718 16:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:13.718 16:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:13.718 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.991 Initializing NVMe Controllers 00:29:16.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:16.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:16.991 Initialization complete. Launching workers. 
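The per-run counters above are internally consistent: in every run, aborts submitted plus aborts that failed to submit equals the I/O completed count, and success plus unsuccess equals the aborts actually submitted. A quick check with the qd=4 numbers:

# qd=4 run: 11085 I/Os completed, 1357 aborts submitted, 9728 not submitted
echo $((1357 + 9728))   # 11085, matches "I/O completed"
echo $((780 + 577))     # 1357, success + unsuccess = aborts submitted

The qd=24 run obeys the same identities (1247 + 7456 = 8703 and 320 + 927 = 1247).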
00:29:16.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29711, failed: 0 00:29:16.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2673, failed to submit 27038 00:29:16.991 success 517, unsuccess 2156, failed 0 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.991 16:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1283715 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1283715 ']' 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1283715 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1283715 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1283715' 00:29:18.362 killing process with pid 1283715 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1283715 00:29:18.362 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1283715 00:29:18.620 00:29:18.620 real 0m14.191s 00:29:18.620 user 0m56.338s 00:29:18.620 sys 0m2.513s 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:18.620 ************************************ 00:29:18.620 END TEST spdk_target_abort 00:29:18.620 ************************************ 00:29:18.620 16:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:18.620 16:10:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:18.620 16:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:18.620 16:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.620 16:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:18.620 
************************************ 00:29:18.620 START TEST kernel_target_abort 00:29:18.620 ************************************ 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:18.620 16:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:19.992 Waiting for block devices as requested 00:29:19.992 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:19.992 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:19.992 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:20.250 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:20.250 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:20.250 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:20.250 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:20.250 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:20.508 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:20.508 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:20.508 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:20.508 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:20.767 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:20.767 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:20.767 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:20.767 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:21.024 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:21.024 No valid GPT data, bailing 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:21.024 16:10:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:21.024 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:21.281 00:29:21.281 Discovery Log Number of Records 2, Generation counter 2 00:29:21.281 =====Discovery Log Entry 0====== 00:29:21.281 trtype: tcp 00:29:21.281 adrfam: ipv4 00:29:21.281 subtype: current discovery subsystem 00:29:21.281 treq: not specified, sq flow control disable supported 00:29:21.281 portid: 1 00:29:21.281 trsvcid: 4420 00:29:21.281 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:21.281 traddr: 10.0.0.1 00:29:21.281 eflags: none 00:29:21.281 sectype: none 00:29:21.281 =====Discovery Log Entry 1====== 00:29:21.281 trtype: tcp 00:29:21.281 adrfam: ipv4 00:29:21.281 subtype: nvme subsystem 00:29:21.281 treq: not specified, sq flow control disable supported 00:29:21.281 portid: 1 00:29:21.281 trsvcid: 4420 00:29:21.281 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:21.281 traddr: 10.0.0.1 00:29:21.281 eflags: none 00:29:21.281 sectype: none 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.281 16:10:47 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:21.281 16:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.281 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.551 Initializing NVMe Controllers 00:29:24.551 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:24.551 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:24.551 Initialization complete. Launching workers. 00:29:24.551 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30537, failed: 0 00:29:24.551 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30537, failed to submit 0 00:29:24.551 success 0, unsuccess 30537, failed 0 00:29:24.551 16:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:24.551 16:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:24.551 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.859 Initializing NVMe Controllers 00:29:27.859 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:27.859 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:27.859 Initialization complete. Launching workers. 
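The configure_kernel_target expansion traced above drives the in-kernel nvmet target entirely through configfs: make the subsystem and namespace directories, echo the attributes, then symlink the subsystem into a port. The trace shows each value being written but not its destination file, so the sketch below names the stock nvmet attribute files as an assumption about where each echo lands (the model-string write at @665 is omitted):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"       # the bare "echo 1" at @667
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # back the namespace with the local drive
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port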
00:29:27.859 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61460, failed: 0 00:29:27.859 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15510, failed to submit 45950 00:29:27.859 success 0, unsuccess 15510, failed 0 00:29:27.859 16:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:27.859 16:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:27.859 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.443 Initializing NVMe Controllers 00:29:30.443 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:30.443 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:30.443 Initialization complete. Launching workers. 00:29:30.443 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59640, failed: 0 00:29:30.443 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14890, failed to submit 44750 00:29:30.443 success 0, unsuccess 14890, failed 0 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:30.443 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:30.701 16:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:31.637 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:31.637 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:31.637 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:29:31.637 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:31.896 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:32.830 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:32.830 00:29:32.830 real 0m14.092s 00:29:32.830 user 0m4.948s 00:29:32.830 sys 0m3.323s 00:29:32.830 16:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.830 16:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.830 ************************************ 00:29:32.830 END TEST kernel_target_abort 00:29:32.830 ************************************ 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:32.830 rmmod nvme_tcp 00:29:32.830 rmmod nvme_fabrics 00:29:32.830 rmmod nvme_keyring 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1283715 ']' 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1283715 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1283715 ']' 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1283715 00:29:32.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1283715) - No such process 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1283715 is not found' 00:29:32.830 Process with pid 1283715 is not found 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:32.830 16:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:34.208 Waiting for block devices as requested 00:29:34.208 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:34.208 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:34.208 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:34.208 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:34.468 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:34.468 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:34.468 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:34.468 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:34.468 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:34.729 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:34.729 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:34.729 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:34.988 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:34.988 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:29:34.988 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:34.988 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:35.247 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.247 16:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.780 16:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.780 00:29:37.780 real 0m38.335s 00:29:37.780 user 1m3.551s 00:29:37.780 sys 0m9.231s 00:29:37.780 16:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.780 16:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:37.780 ************************************ 00:29:37.780 END TEST nvmf_abort_qd_sizes 00:29:37.780 ************************************ 00:29:37.781 16:11:04 -- common/autotest_common.sh@1142 -- # return 0 00:29:37.781 16:11:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:37.781 16:11:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:37.781 16:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.781 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:29:37.781 ************************************ 00:29:37.781 START TEST keyring_file 00:29:37.781 ************************************ 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:37.781 * Looking for test storage... 
00:29:37.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.781 16:11:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.781 16:11:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.781 16:11:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.781 16:11:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.781 16:11:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.781 16:11:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.781 16:11:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:37.781 16:11:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Mwx89twnt9 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:37.781 16:11:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Mwx89twnt9 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Mwx89twnt9 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Mwx89twnt9 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sb5SPN2Emz 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:37.781 16:11:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sb5SPN2Emz 00:29:37.781 16:11:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sb5SPN2Emz 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sb5SPN2Emz 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=1289487 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:37.781 16:11:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1289487 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1289487 ']' 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.781 16:11:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:37.781 [2024-07-15 16:11:04.355754] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
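prep_key, expanded above, wraps each raw hex key in the NVMe TLS PSK interchange framing before writing it to a chmod-0600 tempfile. A minimal sketch of what the helper's inline python appears to compute, assuming the TP-8018 interchange layout: the key bytes get their little-endian CRC-32 appended, the result is base64-encoded, and the whole thing is framed as NVMeTLSkey-1:<hash>:<b64>:, with hash 00 for digest=0 as used here.

key_hex=00112233445566778899aabbccddeeff   # key0 from the test
python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity tag appended to the key
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:00:%s:" % psk)            # 00 = no PSK digest (digest=0)
' "$key_hex"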
00:29:37.781 [2024-07-15 16:11:04.355845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289487 ] 00:29:37.781 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.781 [2024-07-15 16:11:04.416444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.781 [2024-07-15 16:11:04.531395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:38.720 16:11:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:38.720 [2024-07-15 16:11:05.297564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.720 null0 00:29:38.720 [2024-07-15 16:11:05.329616] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:38.720 [2024-07-15 16:11:05.330076] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:38.720 [2024-07-15 16:11:05.337624] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.720 16:11:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:38.720 16:11:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:38.721 [2024-07-15 16:11:05.349640] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:38.721 request: 00:29:38.721 { 00:29:38.721 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.721 "secure_channel": false, 00:29:38.721 "listen_address": { 00:29:38.721 "trtype": "tcp", 00:29:38.721 "traddr": "127.0.0.1", 00:29:38.721 "trsvcid": "4420" 00:29:38.721 }, 00:29:38.721 "method": "nvmf_subsystem_add_listener", 00:29:38.721 "req_id": 1 00:29:38.721 } 00:29:38.721 Got JSON-RPC error response 00:29:38.721 response: 00:29:38.721 { 00:29:38.721 "code": -32602, 00:29:38.721 "message": "Invalid parameters" 00:29:38.721 } 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 
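The negative test above leans on the NOT wrapper: re-adding a listener that already exists must fail, and the es bookkeeping at @648/@651 feeds the (( es > 128 )) check that follows, separating an ordinary nonzero exit (es=1, as here) from death by signal. A reduced sketch of the same idea (the real helper in autotest_common.sh also vets the wrapped command via type -t, as the trace shows):

NOT() {
    # Invert the wrapped command's exit status: pass only if it failed.
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "assertion held: the command failed as expected"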
00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.721 16:11:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=1289625 00:29:38.721 16:11:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1289625 /var/tmp/bperf.sock 00:29:38.721 16:11:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1289625 ']' 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:38.721 16:11:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:38.721 [2024-07-15 16:11:05.399788] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:29:38.721 [2024-07-15 16:11:05.399995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289625 ] 00:29:38.721 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.721 [2024-07-15 16:11:05.459825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.721 [2024-07-15 16:11:05.571593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.979 16:11:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.979 16:11:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:38.979 16:11:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:38.980 16:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:39.240 16:11:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sb5SPN2Emz 00:29:39.240 16:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sb5SPN2Emz 00:29:39.500 16:11:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:39.500 16:11:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:39.500 16:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.500 16:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.500 16:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:39.758 16:11:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Mwx89twnt9 == \/\t\m\p\/\t\m\p\.\M\w\x\8\9\t\w\n\t\9 ]] 00:29:39.758 16:11:06 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:29:39.758 16:11:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:39.758 16:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.758 16:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.758 16:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:40.016 16:11:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sb5SPN2Emz == \/\t\m\p\/\t\m\p\.\s\b\5\S\P\N\2\E\m\z ]] 00:29:40.016 16:11:06 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.016 16:11:06 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:40.016 16:11:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.016 16:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:40.274 16:11:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:40.274 16:11:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:40.274 16:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:40.531 [2024-07-15 16:11:07.421736] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:40.789 nvme0n1 00:29:40.789 16:11:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:40.789 16:11:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.789 16:11:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.789 16:11:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.789 16:11:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.789 16:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.046 16:11:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:41.046 16:11:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:41.046 16:11:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:41.046 16:11:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:41.046 16:11:07 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:41.046 16:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.046 16:11:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:41.304 16:11:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:41.304 16:11:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.304 Running I/O for 1 seconds...
00:29:42.237
00:29:42.237 Latency(us)
00:29:42.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.237 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:29:42.237 nvme0n1 : 1.03 4282.42 16.73 0.00 0.00 29500.56 5097.24 37282.70
00:29:42.237 ===================================================================================================================
00:29:42.237 Total : 4282.42 16.73 0.00 0.00 29500.56 5097.24 37282.70
00:29:42.237 0
00:29:42.237 16:11:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:42.237 16:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:42.494 16:11:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:42.494 16:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:42.494 16:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.494 16:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.494 16:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.494 16:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:42.751 16:11:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:42.751 16:11:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:42.751 16:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:42.751 16:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.751 16:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.751 16:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:42.751 16:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.008 16:11:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:43.008 16:11:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:43.008 16:11:09 keyring_file --
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:43.008 16:11:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:43.008 16:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:43.266 [2024-07-15 16:11:10.162550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:43.266 [2024-07-15 16:11:10.162958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb189a0 (107): Transport endpoint is not connected 00:29:43.266 [2024-07-15 16:11:10.163948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb189a0 (9): Bad file descriptor 00:29:43.266 [2024-07-15 16:11:10.164945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.266 [2024-07-15 16:11:10.164965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:43.266 [2024-07-15 16:11:10.164979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.266 request: 00:29:43.266 { 00:29:43.266 "name": "nvme0", 00:29:43.266 "trtype": "tcp", 00:29:43.266 "traddr": "127.0.0.1", 00:29:43.266 "adrfam": "ipv4", 00:29:43.266 "trsvcid": "4420", 00:29:43.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:43.266 "prchk_reftag": false, 00:29:43.266 "prchk_guard": false, 00:29:43.266 "hdgst": false, 00:29:43.266 "ddgst": false, 00:29:43.266 "psk": "key1", 00:29:43.266 "method": "bdev_nvme_attach_controller", 00:29:43.266 "req_id": 1 00:29:43.266 } 00:29:43.266 Got JSON-RPC error response 00:29:43.266 response: 00:29:43.266 { 00:29:43.266 "code": -5, 00:29:43.266 "message": "Input/output error" 00:29:43.266 } 00:29:43.266 16:11:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:43.266 16:11:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:43.266 16:11:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:43.266 16:11:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:43.266 16:11:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:43.266 16:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.266 16:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.266 16:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.266 16:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.266 16:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.524 16:11:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:43.524 16:11:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:43.524 16:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:43.524 16:11:10 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.524 16:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.524 16:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.524 16:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.820 16:11:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:43.820 16:11:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:43.820 16:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:44.104 16:11:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:44.104 16:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:44.362 16:11:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:44.362 16:11:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:44.362 16:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.620 16:11:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:44.620 16:11:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Mwx89twnt9 00:29:44.620 16:11:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:44.620 16:11:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:44.620 16:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:44.877 [2024-07-15 16:11:11.677784] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Mwx89twnt9': 0100660 00:29:44.877 [2024-07-15 16:11:11.677833] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:44.877 request: 00:29:44.877 { 00:29:44.877 "name": "key0", 00:29:44.877 "path": "/tmp/tmp.Mwx89twnt9", 00:29:44.877 "method": "keyring_file_add_key", 00:29:44.877 "req_id": 1 00:29:44.877 } 00:29:44.877 Got JSON-RPC error response 00:29:44.877 response: 00:29:44.877 { 00:29:44.877 "code": -1, 00:29:44.877 "message": "Operation not permitted" 00:29:44.877 } 00:29:44.877 16:11:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:44.877 16:11:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:44.877 16:11:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:44.877 16:11:11 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:44.877 16:11:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Mwx89twnt9 00:29:44.878 16:11:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:44.878 16:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Mwx89twnt9 00:29:45.135 16:11:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Mwx89twnt9 00:29:45.135 16:11:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:45.135 16:11:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:45.135 16:11:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.135 16:11:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.135 16:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.135 16:11:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:45.401 16:11:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:45.401 16:11:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.401 16:11:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.402 16:11:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.402 16:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.660 [2024-07-15 16:11:12.439894] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Mwx89twnt9': No such file or directory 00:29:45.660 [2024-07-15 16:11:12.439945] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:45.660 [2024-07-15 16:11:12.439973] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:45.660 [2024-07-15 16:11:12.439998] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:45.660 [2024-07-15 16:11:12.440009] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:45.660 request: 00:29:45.660 { 00:29:45.660 "name": "nvme0", 00:29:45.660 "trtype": "tcp", 00:29:45.660 "traddr": "127.0.0.1", 00:29:45.660 "adrfam": "ipv4", 00:29:45.660 
"trsvcid": "4420", 00:29:45.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:45.660 "prchk_reftag": false, 00:29:45.660 "prchk_guard": false, 00:29:45.660 "hdgst": false, 00:29:45.660 "ddgst": false, 00:29:45.661 "psk": "key0", 00:29:45.661 "method": "bdev_nvme_attach_controller", 00:29:45.661 "req_id": 1 00:29:45.661 } 00:29:45.661 Got JSON-RPC error response 00:29:45.661 response: 00:29:45.661 { 00:29:45.661 "code": -19, 00:29:45.661 "message": "No such device" 00:29:45.661 } 00:29:45.661 16:11:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:45.661 16:11:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:45.661 16:11:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:45.661 16:11:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:45.661 16:11:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:45.661 16:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:45.919 16:11:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n1kWfFE2Yn 00:29:45.919 16:11:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:45.919 16:11:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:45.920 16:11:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:45.920 16:11:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:45.920 16:11:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:45.920 16:11:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:45.920 16:11:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:45.920 16:11:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n1kWfFE2Yn 00:29:45.920 16:11:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n1kWfFE2Yn 00:29:45.920 16:11:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.n1kWfFE2Yn 00:29:45.920 16:11:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n1kWfFE2Yn 00:29:45.920 16:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n1kWfFE2Yn 00:29:46.178 16:11:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.178 16:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.440 nvme0n1 00:29:46.440 
16:11:13 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:46.440 16:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:46.440 16:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.440 16:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.440 16:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.440 16:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.697 16:11:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:46.697 16:11:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:46.697 16:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:46.955 16:11:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:46.955 16:11:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:46.955 16:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.955 16:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.955 16:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.212 16:11:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:47.212 16:11:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:47.212 16:11:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:47.212 16:11:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:47.212 16:11:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:47.212 16:11:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:47.212 16:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.471 16:11:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:47.471 16:11:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:47.471 16:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:47.730 16:11:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:47.730 16:11:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:47.730 16:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.988 16:11:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:47.988 16:11:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n1kWfFE2Yn 00:29:47.988 16:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n1kWfFE2Yn 00:29:48.245 16:11:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sb5SPN2Emz 00:29:48.245 16:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sb5SPN2Emz 00:29:48.504 16:11:15 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.504 16:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.762 nvme0n1 00:29:48.762 16:11:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:48.762 16:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:49.332 16:11:15 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:49.332 "subsystems": [ 00:29:49.332 { 00:29:49.332 "subsystem": "keyring", 00:29:49.332 "config": [ 00:29:49.332 { 00:29:49.332 "method": "keyring_file_add_key", 00:29:49.332 "params": { 00:29:49.332 "name": "key0", 00:29:49.332 "path": "/tmp/tmp.n1kWfFE2Yn" 00:29:49.332 } 00:29:49.332 }, 00:29:49.332 { 00:29:49.332 "method": "keyring_file_add_key", 00:29:49.332 "params": { 00:29:49.332 "name": "key1", 00:29:49.332 "path": "/tmp/tmp.sb5SPN2Emz" 00:29:49.332 } 00:29:49.332 } 00:29:49.332 ] 00:29:49.332 }, 00:29:49.332 { 00:29:49.332 "subsystem": "iobuf", 00:29:49.332 "config": [ 00:29:49.332 { 00:29:49.332 "method": "iobuf_set_options", 00:29:49.332 "params": { 00:29:49.332 "small_pool_count": 8192, 00:29:49.332 "large_pool_count": 1024, 00:29:49.332 "small_bufsize": 8192, 00:29:49.332 "large_bufsize": 135168 00:29:49.332 } 00:29:49.332 } 00:29:49.332 ] 00:29:49.332 }, 00:29:49.332 { 00:29:49.332 "subsystem": "sock", 00:29:49.332 "config": [ 00:29:49.332 { 00:29:49.332 "method": "sock_set_default_impl", 00:29:49.332 "params": { 00:29:49.332 "impl_name": "posix" 00:29:49.332 } 00:29:49.332 }, 00:29:49.332 { 00:29:49.332 "method": "sock_impl_set_options", 00:29:49.332 "params": { 00:29:49.332 "impl_name": "ssl", 00:29:49.332 "recv_buf_size": 4096, 00:29:49.332 "send_buf_size": 4096, 00:29:49.332 "enable_recv_pipe": true, 00:29:49.332 "enable_quickack": false, 00:29:49.332 "enable_placement_id": 0, 00:29:49.333 "enable_zerocopy_send_server": true, 00:29:49.333 "enable_zerocopy_send_client": false, 00:29:49.333 "zerocopy_threshold": 0, 00:29:49.333 "tls_version": 0, 00:29:49.333 "enable_ktls": false 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "sock_impl_set_options", 00:29:49.333 "params": { 00:29:49.333 "impl_name": "posix", 00:29:49.333 "recv_buf_size": 2097152, 00:29:49.333 "send_buf_size": 2097152, 00:29:49.333 "enable_recv_pipe": true, 00:29:49.333 "enable_quickack": false, 00:29:49.333 "enable_placement_id": 0, 00:29:49.333 "enable_zerocopy_send_server": true, 00:29:49.333 "enable_zerocopy_send_client": false, 00:29:49.333 "zerocopy_threshold": 0, 00:29:49.333 "tls_version": 0, 00:29:49.333 "enable_ktls": false 00:29:49.333 } 00:29:49.333 } 00:29:49.333 ] 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "subsystem": "vmd", 00:29:49.333 "config": [] 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "subsystem": "accel", 00:29:49.333 "config": [ 00:29:49.333 { 00:29:49.333 "method": "accel_set_options", 00:29:49.333 "params": { 00:29:49.333 "small_cache_size": 128, 00:29:49.333 "large_cache_size": 16, 00:29:49.333 "task_count": 2048, 00:29:49.333 "sequence_count": 2048, 00:29:49.333 "buf_count": 2048 00:29:49.333 } 00:29:49.333 } 00:29:49.333 ] 00:29:49.333 
}, 00:29:49.333 { 00:29:49.333 "subsystem": "bdev", 00:29:49.333 "config": [ 00:29:49.333 { 00:29:49.333 "method": "bdev_set_options", 00:29:49.333 "params": { 00:29:49.333 "bdev_io_pool_size": 65535, 00:29:49.333 "bdev_io_cache_size": 256, 00:29:49.333 "bdev_auto_examine": true, 00:29:49.333 "iobuf_small_cache_size": 128, 00:29:49.333 "iobuf_large_cache_size": 16 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_raid_set_options", 00:29:49.333 "params": { 00:29:49.333 "process_window_size_kb": 1024 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_iscsi_set_options", 00:29:49.333 "params": { 00:29:49.333 "timeout_sec": 30 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_nvme_set_options", 00:29:49.333 "params": { 00:29:49.333 "action_on_timeout": "none", 00:29:49.333 "timeout_us": 0, 00:29:49.333 "timeout_admin_us": 0, 00:29:49.333 "keep_alive_timeout_ms": 10000, 00:29:49.333 "arbitration_burst": 0, 00:29:49.333 "low_priority_weight": 0, 00:29:49.333 "medium_priority_weight": 0, 00:29:49.333 "high_priority_weight": 0, 00:29:49.333 "nvme_adminq_poll_period_us": 10000, 00:29:49.333 "nvme_ioq_poll_period_us": 0, 00:29:49.333 "io_queue_requests": 512, 00:29:49.333 "delay_cmd_submit": true, 00:29:49.333 "transport_retry_count": 4, 00:29:49.333 "bdev_retry_count": 3, 00:29:49.333 "transport_ack_timeout": 0, 00:29:49.333 "ctrlr_loss_timeout_sec": 0, 00:29:49.333 "reconnect_delay_sec": 0, 00:29:49.333 "fast_io_fail_timeout_sec": 0, 00:29:49.333 "disable_auto_failback": false, 00:29:49.333 "generate_uuids": false, 00:29:49.333 "transport_tos": 0, 00:29:49.333 "nvme_error_stat": false, 00:29:49.333 "rdma_srq_size": 0, 00:29:49.333 "io_path_stat": false, 00:29:49.333 "allow_accel_sequence": false, 00:29:49.333 "rdma_max_cq_size": 0, 00:29:49.333 "rdma_cm_event_timeout_ms": 0, 00:29:49.333 "dhchap_digests": [ 00:29:49.333 "sha256", 00:29:49.333 "sha384", 00:29:49.333 "sha512" 00:29:49.333 ], 00:29:49.333 "dhchap_dhgroups": [ 00:29:49.333 "null", 00:29:49.333 "ffdhe2048", 00:29:49.333 "ffdhe3072", 00:29:49.333 "ffdhe4096", 00:29:49.333 "ffdhe6144", 00:29:49.333 "ffdhe8192" 00:29:49.333 ] 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_nvme_attach_controller", 00:29:49.333 "params": { 00:29:49.333 "name": "nvme0", 00:29:49.333 "trtype": "TCP", 00:29:49.333 "adrfam": "IPv4", 00:29:49.333 "traddr": "127.0.0.1", 00:29:49.333 "trsvcid": "4420", 00:29:49.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.333 "prchk_reftag": false, 00:29:49.333 "prchk_guard": false, 00:29:49.333 "ctrlr_loss_timeout_sec": 0, 00:29:49.333 "reconnect_delay_sec": 0, 00:29:49.333 "fast_io_fail_timeout_sec": 0, 00:29:49.333 "psk": "key0", 00:29:49.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.333 "hdgst": false, 00:29:49.333 "ddgst": false 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_nvme_set_hotplug", 00:29:49.333 "params": { 00:29:49.333 "period_us": 100000, 00:29:49.333 "enable": false 00:29:49.333 } 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "method": "bdev_wait_for_examine" 00:29:49.333 } 00:29:49.333 ] 00:29:49.333 }, 00:29:49.333 { 00:29:49.333 "subsystem": "nbd", 00:29:49.333 "config": [] 00:29:49.333 } 00:29:49.333 ] 00:29:49.333 }' 00:29:49.333 16:11:15 keyring_file -- keyring/file.sh@114 -- # killprocess 1289625 00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1289625 ']' 00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1289625
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@953 -- # uname
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1289625
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1289625'
00:29:49.333 killing process with pid 1289625
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@967 -- # kill 1289625
00:29:49.333 Received shutdown signal, test time was about 1.000000 seconds
00:29:49.333
00:29:49.333 Latency(us)
00:29:49.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:49.333 ===================================================================================================================
00:29:49.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:49.333 16:11:15 keyring_file -- common/autotest_common.sh@972 -- # wait 1289625
00:29:49.592 16:11:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=1290968
00:29:49.592 16:11:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1290968 /var/tmp/bperf.sock
00:29:49.592 16:11:16 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1290968 ']'
00:29:49.592 16:11:16 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:49.592 16:11:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:29:49.592 16:11:16 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:49.592 16:11:16 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:49.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
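The restart step above captures the live bdevperf configuration with save_config and feeds it to the new bdevperf instance on a process-substitution file descriptor, which is where the -c /dev/fd/63 argument in the trace comes from. A minimal sketch of that pattern, assuming the repo-relative paths shown elsewhere in the trace:

# capture the JSON config of the running bdevperf, keyring keys and the
# TLS-enabled attach_controller call included
config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# restart bdevperf; <(echo ...) surfaces the JSON as /dev/fd/63 in the child,
# so the whole keyring + controller setup is replayed at startup
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!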
00:29:49.593 16:11:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:49.593 "subsystems": [ 00:29:49.593 { 00:29:49.593 "subsystem": "keyring", 00:29:49.593 "config": [ 00:29:49.593 { 00:29:49.593 "method": "keyring_file_add_key", 00:29:49.593 "params": { 00:29:49.593 "name": "key0", 00:29:49.593 "path": "/tmp/tmp.n1kWfFE2Yn" 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "keyring_file_add_key", 00:29:49.593 "params": { 00:29:49.593 "name": "key1", 00:29:49.593 "path": "/tmp/tmp.sb5SPN2Emz" 00:29:49.593 } 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "iobuf", 00:29:49.593 "config": [ 00:29:49.593 { 00:29:49.593 "method": "iobuf_set_options", 00:29:49.593 "params": { 00:29:49.593 "small_pool_count": 8192, 00:29:49.593 "large_pool_count": 1024, 00:29:49.593 "small_bufsize": 8192, 00:29:49.593 "large_bufsize": 135168 00:29:49.593 } 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "sock", 00:29:49.593 "config": [ 00:29:49.593 { 00:29:49.593 "method": "sock_set_default_impl", 00:29:49.593 "params": { 00:29:49.593 "impl_name": "posix" 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "sock_impl_set_options", 00:29:49.593 "params": { 00:29:49.593 "impl_name": "ssl", 00:29:49.593 "recv_buf_size": 4096, 00:29:49.593 "send_buf_size": 4096, 00:29:49.593 "enable_recv_pipe": true, 00:29:49.593 "enable_quickack": false, 00:29:49.593 "enable_placement_id": 0, 00:29:49.593 "enable_zerocopy_send_server": true, 00:29:49.593 "enable_zerocopy_send_client": false, 00:29:49.593 "zerocopy_threshold": 0, 00:29:49.593 "tls_version": 0, 00:29:49.593 "enable_ktls": false 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "sock_impl_set_options", 00:29:49.593 "params": { 00:29:49.593 "impl_name": "posix", 00:29:49.593 "recv_buf_size": 2097152, 00:29:49.593 "send_buf_size": 2097152, 00:29:49.593 "enable_recv_pipe": true, 00:29:49.593 "enable_quickack": false, 00:29:49.593 "enable_placement_id": 0, 00:29:49.593 "enable_zerocopy_send_server": true, 00:29:49.593 "enable_zerocopy_send_client": false, 00:29:49.593 "zerocopy_threshold": 0, 00:29:49.593 "tls_version": 0, 00:29:49.593 "enable_ktls": false 00:29:49.593 } 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "vmd", 00:29:49.593 "config": [] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "accel", 00:29:49.593 "config": [ 00:29:49.593 { 00:29:49.593 "method": "accel_set_options", 00:29:49.593 "params": { 00:29:49.593 "small_cache_size": 128, 00:29:49.593 "large_cache_size": 16, 00:29:49.593 "task_count": 2048, 00:29:49.593 "sequence_count": 2048, 00:29:49.593 "buf_count": 2048 00:29:49.593 } 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "bdev", 00:29:49.593 "config": [ 00:29:49.593 { 00:29:49.593 "method": "bdev_set_options", 00:29:49.593 "params": { 00:29:49.593 "bdev_io_pool_size": 65535, 00:29:49.593 "bdev_io_cache_size": 256, 00:29:49.593 "bdev_auto_examine": true, 00:29:49.593 "iobuf_small_cache_size": 128, 00:29:49.593 "iobuf_large_cache_size": 16 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "bdev_raid_set_options", 00:29:49.593 "params": { 00:29:49.593 "process_window_size_kb": 1024 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "bdev_iscsi_set_options", 00:29:49.593 "params": { 00:29:49.593 "timeout_sec": 30 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": 
"bdev_nvme_set_options", 00:29:49.593 "params": { 00:29:49.593 "action_on_timeout": "none", 00:29:49.593 "timeout_us": 0, 00:29:49.593 "timeout_admin_us": 0, 00:29:49.593 "keep_alive_timeout_ms": 10000, 00:29:49.593 "arbitration_burst": 0, 00:29:49.593 "low_priority_weight": 0, 00:29:49.593 "medium_priority_weight": 0, 00:29:49.593 "high_priority_weight": 0, 00:29:49.593 "nvme_adminq_poll_period_us": 10000, 00:29:49.593 "nvme_ioq_poll_period_us": 0, 00:29:49.593 "io_queue_requests": 512, 00:29:49.593 "delay_cmd_submit": true, 00:29:49.593 "transport_retry_count": 4, 00:29:49.593 "bdev_retry_count": 3, 00:29:49.593 "transport_ack_timeout": 0, 00:29:49.593 "ctrlr_loss_timeout_sec": 0, 00:29:49.593 "reconnect_delay_sec": 0, 00:29:49.593 "fast_io_fail_timeout_sec": 0, 00:29:49.593 "disable_auto_failback": false, 00:29:49.593 "generate_uuids": false, 00:29:49.593 "transport_tos": 0, 00:29:49.593 "nvme_error_stat": false, 00:29:49.593 "rdma_srq_size": 0, 00:29:49.593 "io_path_stat": false, 00:29:49.593 "allow_accel_sequence": false, 00:29:49.593 "rdma_max_cq_size": 0, 00:29:49.593 "rdma_cm_event_timeout_ms": 0, 00:29:49.593 "dhchap_digests": [ 00:29:49.593 "sha256", 00:29:49.593 "sha384", 00:29:49.593 "sha512" 00:29:49.593 ], 00:29:49.593 "dhchap_dhgroups": [ 00:29:49.593 "null", 00:29:49.593 "ffdhe2048", 00:29:49.593 "ffdhe3072", 00:29:49.593 "ffdhe4096", 00:29:49.593 "ffdhe6144", 00:29:49.593 "ffdhe8192" 00:29:49.593 ] 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "bdev_nvme_attach_controller", 00:29:49.593 "params": { 00:29:49.593 "name": "nvme0", 00:29:49.593 "trtype": "TCP", 00:29:49.593 "adrfam": "IPv4", 00:29:49.593 "traddr": "127.0.0.1", 00:29:49.593 "trsvcid": "4420", 00:29:49.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.593 "prchk_reftag": false, 00:29:49.593 "prchk_guard": false, 00:29:49.593 "ctrlr_loss_timeout_sec": 0, 00:29:49.593 "reconnect_delay_sec": 0, 00:29:49.593 "fast_io_fail_timeout_sec": 0, 00:29:49.593 "psk": "key0", 00:29:49.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.593 "hdgst": false, 00:29:49.593 "ddgst": false 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "bdev_nvme_set_hotplug", 00:29:49.593 "params": { 00:29:49.593 "period_us": 100000, 00:29:49.593 "enable": false 00:29:49.593 } 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "method": "bdev_wait_for_examine" 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }, 00:29:49.593 { 00:29:49.593 "subsystem": "nbd", 00:29:49.593 "config": [] 00:29:49.593 } 00:29:49.593 ] 00:29:49.593 }' 00:29:49.593 16:11:16 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.593 16:11:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:49.593 [2024-07-15 16:11:16.321027] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:29:49.593 [2024-07-15 16:11:16.321109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290968 ] 00:29:49.593 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.593 [2024-07-15 16:11:16.380661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.593 [2024-07-15 16:11:16.495071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.851 [2024-07-15 16:11:16.683636] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:50.417 16:11:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.417 16:11:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:50.417 16:11:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:50.417 16:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.417 16:11:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:50.674 16:11:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:50.674 16:11:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:50.674 16:11:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:50.675 16:11:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.675 16:11:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.675 16:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.675 16:11:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:50.932 16:11:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:50.932 16:11:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:50.932 16:11:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:50.932 16:11:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.932 16:11:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.932 16:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.932 16:11:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:51.190 16:11:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:51.190 16:11:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:51.190 16:11:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:51.190 16:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:51.450 16:11:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:51.450 16:11:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:51.450 16:11:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.n1kWfFE2Yn /tmp/tmp.sb5SPN2Emz 00:29:51.450 16:11:18 keyring_file -- keyring/file.sh@20 -- # killprocess 1290968 00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1290968 ']' 00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1290968 00:29:51.450 16:11:18 keyring_file -- 
common/autotest_common.sh@953 -- # uname
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290968
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290968'
00:29:51.450 killing process with pid 1290968
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@967 -- # kill 1290968
00:29:51.450 Received shutdown signal, test time was about 1.000000 seconds
00:29:51.450
00:29:51.450 Latency(us)
00:29:51.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.450 ===================================================================================================================
00:29:51.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:51.450 16:11:18 keyring_file -- common/autotest_common.sh@972 -- # wait 1290968
00:29:51.710 16:11:18 keyring_file -- keyring/file.sh@21 -- # killprocess 1289487
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1289487 ']'
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1289487
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@953 -- # uname
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1289487
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1289487'
00:29:51.710 killing process with pid 1289487
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@967 -- # kill 1289487
00:29:51.710 [2024-07-15 16:11:18.583365] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:29:51.710 16:11:18 keyring_file -- common/autotest_common.sh@972 -- # wait 1289487
00:29:52.279
00:29:52.279 real 0m14.883s
00:29:52.279 user 0m36.026s
00:29:52.279 sys 0m3.324s
00:29:52.279 16:11:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:52.279 16:11:19 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:52.279 ************************************
00:29:52.279 END TEST keyring_file
00:29:52.279 ************************************
00:29:52.279 16:11:19 -- common/autotest_common.sh@1142 -- # return 0
00:29:52.279 16:11:19 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:29:52.279 16:11:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:29:52.279 16:11:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:29:52.279 16:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:52.279 16:11:19 -- common/autotest_common.sh@10 -- # set +x
00:29:52.279 ************************************
00:29:52.279 START TEST keyring_linux
00:29:52.279 ************************************
00:29:52.279 16:11:19 keyring_linux -- common/autotest_common.sh@1123 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:52.279 * Looking for test storage... 00:29:52.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.279 16:11:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.279 16:11:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.279 16:11:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.279 16:11:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.279 16:11:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.279 16:11:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.279 16:11:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:52.279 16:11:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:52.279 16:11:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:52.279 /tmp/:spdk-test:key0 00:29:52.279 16:11:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:52.279 16:11:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:52.279 16:11:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:52.540 16:11:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:52.540 16:11:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:52.540 /tmp/:spdk-test:key1 00:29:52.540 16:11:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1291453 00:29:52.540 16:11:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:52.540 16:11:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1291453 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1291453 ']' 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.540 16:11:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:52.540 [2024-07-15 16:11:19.273748] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
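The two `python -` steps above are where prep_key converts each raw hex string into the NVMe TLS interchange form. Judging by the value that appears later in this run (NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:), format_key appears to base64-encode the key bytes with a little-endian CRC32 appended; the body below writes that assumption out as a sketch, not the verbatim nvmf/common.sh source:

format_key() { # usage: format_key <prefix> <key> <digest>
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")          # 4-byte integrity tail
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0   # digest 0 -> ':00:'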
00:29:52.540 [2024-07-15 16:11:19.273843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291453 ] 00:29:52.540 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.540 [2024-07-15 16:11:19.330397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.540 [2024-07-15 16:11:19.439356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.799 16:11:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.799 16:11:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:52.799 16:11:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:52.799 16:11:19 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.799 16:11:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:52.799 [2024-07-15 16:11:19.698642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.799 null0 00:29:52.799 [2024-07-15 16:11:19.730696] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:52.799 [2024-07-15 16:11:19.731222] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.059 16:11:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:53.059 866088138 00:29:53.059 16:11:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:53.059 840390276 00:29:53.059 16:11:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1291460 00:29:53.059 16:11:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:53.059 16:11:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1291460 /var/tmp/bperf.sock 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1291460 ']' 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.059 16:11:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:53.059 [2024-07-15 16:11:19.796550] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
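Before the next RPCs land it helps to see the keyring_linux flow end to end. Every command and literal below is lifted from this trace; only the long repo path in front of rpc.py is shortened:

# park both interchange PSKs in the kernel session keyring (@s); keyctl echoes
# the serial it assigned (866088138 and 840390276 in this run)
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s
# let bdevperf resolve ':spdk-test:*' names through the kernel keyring, then init
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# attach over TLS, naming the kernel key rather than a PSK file
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0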
00:29:53.059 [2024-07-15 16:11:19.796550] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:29:53.059 [2024-07-15 16:11:19.796624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291460 ]
00:29:53.059 EAL: No free 2048 kB hugepages reported on node 1
00:29:53.059 [2024-07-15 16:11:19.856717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:53.059 [2024-07-15 16:11:19.974778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:53.317 16:11:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:53.317 16:11:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0
00:29:53.317 16:11:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:29:53.317 16:11:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:29:53.576 16:11:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:29:53.576 16:11:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:53.833 16:11:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:29:53.833 16:11:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:29:54.090 [2024-07-15 16:11:20.827485] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:29:54.090 nvme0n1
00:29:54.090 16:11:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:29:54.090 16:11:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:29:54.090 16:11:20 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:29:54.090 16:11:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:29:54.090 16:11:20 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:29:54.090 16:11:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:54.348 16:11:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:29:54.348 16:11:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:29:54.348 16:11:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:29:54.348 16:11:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:29:54.348 16:11:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:54.348 16:11:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:29:54.348 16:11:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@25 -- # sn=866088138
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 866088138 == \8\6\6\0\8\8\1\3\8 ]]
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 866088138
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:29:54.605 16:11:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:54.864 Running I/O for 1 seconds...
00:29:55.799
00:29:55.799 Latency(us)
00:29:55.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.799 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:55.799 nvme0n1 : 1.02 4183.00 16.34 0.00 0.00 30272.35 8641.04 37476.88
00:29:55.799 ===================================================================================================================
00:29:55.799 Total : 4183.00 16.34 0.00 0.00 30272.35 8641.04 37476.88
00:29:55.799 0
00:29:55.800 16:11:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:29:55.800 16:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:29:56.057 16:11:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:29:56.057 16:11:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:29:56.057 16:11:22 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:29:56.057 16:11:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:29:56.057 16:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:56.057 16:11:22 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:29:56.315 16:11:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:29:56.315 16:11:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:29:56.315 16:11:23 keyring_linux -- keyring/linux.sh@23 -- # return
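The check_keys traces above cross-check SPDK's view of the keyring (keyring_get_keys over JSON-RPC) against the kernel's: the serial the RPC reports must match what keyctl search resolves for the same name, and keyctl print of that serial must return the original PSK. A hedged sketch of that verification, using only the calls visible in the trace (verify_key is an illustrative name; the rpc.py path and key name are the ones this job uses):

  verify_key() {
      local name=$1 expected_psk=$2 rpc_sn kernel_sn
      # serial as SPDK reports it over JSON-RPC...
      rpc_sn=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock keyring_get_keys \
          | jq -r ".[] | select(.name == \"$name\") | .sn")
      # ...and as the kernel resolves it in the session keyring
      kernel_sn=$(keyctl search @s user "$name")
      [[ $rpc_sn == "$kernel_sn" ]] || return 1
      # the payload must round-trip unchanged
      [[ $(keyctl print "$kernel_sn") == "$expected_psk" ]]
  }
  verify_key :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"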
00:29:56.315 16:11:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:56.315 16:11:23 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:56.315 16:11:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:56.574 [2024-07-15 16:11:23.325971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:29:56.574 [2024-07-15 16:11:23.326435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12653f0 (107): Transport endpoint is not connected
00:29:56.574 [2024-07-15 16:11:23.327427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12653f0 (9): Bad file descriptor
00:29:56.574 [2024-07-15 16:11:23.328425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:29:56.574 [2024-07-15 16:11:23.328453] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:29:56.574 [2024-07-15 16:11:23.328479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:29:56.574 request:
00:29:56.574 {
00:29:56.574 "name": "nvme0",
00:29:56.574 "trtype": "tcp",
00:29:56.574 "traddr": "127.0.0.1",
00:29:56.574 "adrfam": "ipv4",
00:29:56.574 "trsvcid": "4420",
00:29:56.574 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:56.574 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:56.574 "prchk_reftag": false,
00:29:56.574 "prchk_guard": false,
00:29:56.574 "hdgst": false,
00:29:56.574 "ddgst": false,
00:29:56.574 "psk": ":spdk-test:key1",
00:29:56.574 "method": "bdev_nvme_attach_controller",
00:29:56.574 "req_id": 1
00:29:56.574 }
00:29:56.574 Got JSON-RPC error response
00:29:56.574 response:
00:29:56.574 {
00:29:56.574 "code": -5,
00:29:56.574 "message": "Input/output error"
00:29:56.574 }
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
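The NOT wrapper above is a negative test: it attaches with the wrong PSK (:spdk-test:key1) and passes only if the command fails, which it does here when controller init breaks down and rpc.py surfaces the JSON-RPC error (-5, Input/output error) as a non-zero exit. A simplified sketch of that expected-failure pattern (expect_failure is an illustrative stand-in; the traced NOT helper additionally distinguishes signal deaths via es > 128 and can match an expected error string):

  expect_failure() {
      local es=0
      "$@" || es=$?
      # the command under test must have failed for this check to pass
      (( es != 0 ))
  }
  expect_failure /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1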
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@33 -- # sn=866088138
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 866088138
00:29:56.574 1 links removed
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@33 -- # sn=840390276
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 840390276
00:29:56.574 1 links removed
00:29:56.574 16:11:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1291460
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1291460 ']'
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1291460
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1291460
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1291460'
00:29:56.574 killing process with pid 1291460
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@967 -- # kill 1291460
00:29:56.574 Received shutdown signal, test time was about 1.000000 seconds
00:29:56.574
00:29:56.574 Latency(us)
00:29:56.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:56.574 ===================================================================================================================
00:29:56.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:56.574 16:11:23 keyring_linux -- common/autotest_common.sh@972 -- # wait 1291460
00:29:56.851 16:11:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1291453
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1291453 ']'
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1291453
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1291453
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1291453'
00:29:56.851 killing process with pid 1291453
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@967 -- # kill 1291453
00:29:56.851 16:11:23 keyring_linux -- common/autotest_common.sh@972 -- # wait 1291453
00:29:57.420
00:29:57.420 real 0m5.053s
00:29:57.420 user 0m9.394s
00:29:57.420 sys 0m1.509s
00:29:57.420 16:11:24 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:57.420 16:11:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:29:57.420 ************************************
00:29:57.420 END TEST keyring_linux
00:29:57.420 ************************************
00:29:57.420 16:11:24 -- common/autotest_common.sh@1142 -- # return 0
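The killprocess traces above show the teardown discipline used for both daemons: validate the pid, confirm the process is still alive and what it actually is (ps -o comm=), special-case processes running under sudo, then kill and wait so the exit status is reaped. A condensed sketch of that pattern, assuming only the commands visible in the trace (safe_kill is an illustrative name):

  safe_kill() {
      local pid=$1 comm
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      comm=$(ps --no-headers -o comm= "$pid")  # sanity-check what we kill
      echo "killing process with pid $pid ($comm)"
      kill "$pid"
      wait "$pid"                              # reap it (works for our children)
  }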
00:29:57.420 16:11:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:29:57.420 16:11:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:29:57.420 16:11:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:29:57.420 16:11:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:29:57.420 16:11:24 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:29:57.420 16:11:24 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:29:57.420 16:11:24 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:29:57.420 16:11:24 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:57.420 16:11:24 -- common/autotest_common.sh@10 -- # set +x
00:29:57.420 16:11:24 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:29:57.420 16:11:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:29:57.420 16:11:24 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:29:57.420 16:11:24 -- common/autotest_common.sh@10 -- # set +x
00:29:59.321 INFO: APP EXITING
00:29:59.321 INFO: killing all VMs
00:29:59.321 INFO: killing vhost app
00:29:59.321 INFO: EXIT DONE
00:30:00.255 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:30:00.255 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:30:00.255 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:30:00.255 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:30:00.255 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:30:00.255 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:30:00.255 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:30:00.255 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:30:00.255 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:30:00.255 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:30:00.255 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:30:00.255 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:30:00.511 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:30:00.511 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:30:00.511 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:30:00.511 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:30:00.511 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:30:01.885 Cleaning
00:30:01.885 Removing: /var/run/dpdk/spdk0/config
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:01.885 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:01.885 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:01.885 Removing: /var/run/dpdk/spdk1/config
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:01.885 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:01.885 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:01.885 Removing: /var/run/dpdk/spdk1/mp_socket
00:30:01.885 Removing: /var/run/dpdk/spdk2/config
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:01.885 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:01.885 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:01.885 Removing: /var/run/dpdk/spdk3/config
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:01.885 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:01.885 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:01.885 Removing: /var/run/dpdk/spdk4/config
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:01.885 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:01.885 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:01.885 Removing: /dev/shm/bdev_svc_trace.1
00:30:01.885 Removing: /dev/shm/nvmf_trace.0
00:30:01.885 Removing: /dev/shm/spdk_tgt_trace.pid1030366
00:30:01.885 Removing: /var/run/dpdk/spdk0
00:30:01.885 Removing: /var/run/dpdk/spdk1
00:30:01.885 Removing: /var/run/dpdk/spdk2
00:30:01.885 Removing: /var/run/dpdk/spdk3
00:30:01.885 Removing: /var/run/dpdk/spdk4
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1028693
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1029444
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1030366
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1030803
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1031493
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1031633
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1032346
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1032362
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1032604
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1033911
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1034836
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1035142
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1035342
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1035673
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1035861
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1036018
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1036196
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1036480
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1036802
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1039774
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1039939
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1040112
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1040230
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1040625
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1040679
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041105
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041187
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041401
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041541
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041703
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1041841
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1042212
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1042369
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1042682
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1042848
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1042880
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043065
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043223
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043380
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043654
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043816
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1043969
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1044242
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1044406
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1044563
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1044836
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1044998
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1045151
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1045428
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1045587
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1045744
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1046017
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1046179
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1046356
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1046617
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1046774
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1047051
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1047127
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1047331
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1049513
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1075878
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1078640
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1085492
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1088795
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1091277
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1091681
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1095650
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1099622
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1099629
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1100167
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1100831
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1101485
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1101884
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1101893
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1102032
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1102167
00:30:01.885 Removing: /var/run/dpdk/spdk_pid1102173
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1102822
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1103393
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1104031
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1104547
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1104554
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1104702
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1105689
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1106831
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1112410
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1112693
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1115325
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1119028
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1121086
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1127472
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1132796
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1133996
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1134660
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1145484
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1147823
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1173426
00:30:01.886 Removing: /var/run/dpdk/spdk_pid1176338
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1177438
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1178717
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1178858
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1178993
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1179129
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1179575
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1180891
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1181506
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1181934
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1183542
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1183967
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1184410
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1186923
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1192822
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1195506
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1199980
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1200923
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1202146
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1204768
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1207054
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1211394
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1211398
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1214305
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1214439
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1214575
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1214856
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1214972
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1217726
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1218066
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1220729
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1222587
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1226024
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1229330
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1236293
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1240637
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1240639
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1252852
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1253256
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1253696
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1254192
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1254775
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1255185
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1255600
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1256117
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1258626
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1258795
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1262566
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1262734
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1264347
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1269886
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1270007
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1272782
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1274255
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1275701
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1276441
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1277971
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1278741
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1284143
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1284537
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1284931
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1286366
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1286768
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1287165
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1289487
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1289625
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1290968
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1291453
00:30:02.144 Removing: /var/run/dpdk/spdk_pid1291460
00:30:02.144 Clean
00:30:02.145 16:11:29 -- common/autotest_common.sh@1451 -- # return 0
00:30:02.145 16:11:29 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:30:02.145 16:11:29 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:02.145 16:11:29 -- common/autotest_common.sh@10 -- # set +x
00:30:02.145 16:11:29 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:30:02.145 16:11:29 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:02.145 16:11:29 -- common/autotest_common.sh@10 -- # set +x
00:30:02.402 16:11:29 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:02.402 16:11:29 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:30:02.402 16:11:29 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:30:02.402 16:11:29 -- spdk/autotest.sh@391 -- # hash lcov
00:30:02.402 16:11:29 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:02.402 16:11:29 -- spdk/autotest.sh@393 -- # hostname
00:30:02.402 16:11:29 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:02.402 geninfo: WARNING: invalid characters removed from testname!
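The lcov capture above (-c -d … -t spdk-gp-11) writes the post-test counters to cov_test.info; the commands that follow merge it with the pre-test baseline and prune paths that are not interesting for coverage. The workflow, condensed into a sketch (paths shortened relative to the job's output directory; LCOV_OPTS stands in for the repeated --rc flags):

  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # merge the pre-test baseline with the post-test capture
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
  # then strip vendored and out-of-tree code from the combined report
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r cov_total.info "$pat" -o cov_total.info
  done
  rm -f cov_base.info cov_test.info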
00:30:34.483 16:11:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:34.483 16:12:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:37.018 16:12:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:40.307 16:12:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:42.906 16:12:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:46.191 16:12:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:48.980 16:12:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:48.980 16:12:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:48.980 16:12:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:48.980 16:12:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:48.980 16:12:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:48.980 16:12:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.980 16:12:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.980 16:12:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.980 16:12:15 -- paths/export.sh@5 -- $ export PATH
00:30:48.980 16:12:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.980 16:12:15 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:48.980 16:12:15 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:48.980 16:12:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721052735.XXXXXX
00:30:48.980 16:12:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721052735.7U0a6y
00:30:48.980 16:12:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:48.980 16:12:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:48.980 16:12:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:48.980 16:12:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:48.980 16:12:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:48.980 16:12:15 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:48.980 16:12:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:48.980 16:12:15 -- common/autotest_common.sh@10 -- $ set +x
00:30:48.981 16:12:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:48.981 16:12:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:48.981 16:12:15 -- pm/common@17 -- $ local monitor
00:30:48.981 16:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:48.981 16:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:48.981 16:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:48.981 16:12:15 -- pm/common@21 -- $ date +%s
00:30:48.981 16:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:48.981 16:12:15 -- pm/common@21 -- $ date +%s
00:30:48.981 16:12:15 -- pm/common@25 -- $ sleep 1
00:30:48.981 16:12:15 -- pm/common@21 -- $ date +%s
00:30:48.981 16:12:15 -- pm/common@21 -- $ date +%s
00:30:48.981 16:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052735
00:30:48.981 16:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052735
00:30:48.981 16:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052735
00:30:48.981 16:12:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052735
00:30:48.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052735_collect-vmstat.pm.log
00:30:48.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052735_collect-cpu-load.pm.log
00:30:48.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052735_collect-cpu-temp.pm.log
00:30:48.989 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052735_collect-bmc-pm.bmc.pm.log
00:30:49.905 16:12:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:49.916 16:12:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:30:49.916 16:12:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:49.916 16:12:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:49.916 16:12:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:49.916 16:12:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:49.916 16:12:16 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:49.916 16:12:16 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:49.916 16:12:16 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:49.916 16:12:16 -- spdk/autopackage.sh@20 -- $ exit 0
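stop_monitor_resources, traced just below via the EXIT trap, shuts down each power-monitor daemon through its pidfile: if the pidfile exists, the pid is resolved and sent SIGTERM (via sudo -E for the BMC collector, which runs as root). A minimal sketch of that pidfile pattern, assuming the pid is read back from the pidfile each collector writes (stop_monitor and power_dir are illustrative names):

  stop_monitor() {
      local pidfile=$1 use_sudo=${2:-}
      [[ -e $pidfile ]] || return 0        # monitor never started
      local pid
      pid=$(<"$pidfile")
      ${use_sudo:+sudo -E} kill -TERM "$pid"
  }
  power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  stop_monitor "$power_dir/collect-cpu-load.pid"
  stop_monitor "$power_dir/collect-vmstat.pid"
  stop_monitor "$power_dir/collect-cpu-temp.pid"
  stop_monitor "$power_dir/collect-bmc-pm.pid" sudo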
00:30:49.916 16:12:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:49.916 16:12:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:49.916 16:12:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:49.916 16:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:49.916 16:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:49.916 16:12:16 -- pm/common@44 -- $ pid=1301799
00:30:49.916 16:12:16 -- pm/common@50 -- $ kill -TERM 1301799
00:30:49.916 16:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:49.916 16:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:49.916 16:12:16 -- pm/common@44 -- $ pid=1301801
00:30:49.916 16:12:16 -- pm/common@50 -- $ kill -TERM 1301801
00:30:49.916 16:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:49.916 16:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:49.916 16:12:16 -- pm/common@44 -- $ pid=1301803
00:30:49.916 16:12:16 -- pm/common@50 -- $ kill -TERM 1301803
00:30:49.916 16:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:49.916 16:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:49.916 16:12:16 -- pm/common@44 -- $ pid=1301833
00:30:49.916 16:12:16 -- pm/common@50 -- $ sudo -E kill -TERM 1301833
00:30:49.926 + [[ -n 944750 ]]
00:30:49.926 + sudo kill 944750
00:30:49.935 [Pipeline] }
00:30:49.946 [Pipeline] // stage
00:30:49.952 [Pipeline] }
00:30:49.971 [Pipeline] // timeout
00:30:49.978 [Pipeline] }
00:30:49.999 [Pipeline] // catchError
00:30:50.005 [Pipeline] }
00:30:50.026 [Pipeline] // wrap
00:30:50.034 [Pipeline] }
00:30:50.054 [Pipeline] // catchError
00:30:50.066 [Pipeline] stage
00:30:50.068 [Pipeline] { (Epilogue)
00:30:50.082 [Pipeline] catchError
00:30:50.083 [Pipeline] {
00:30:50.096 [Pipeline] echo
00:30:50.097 Cleanup processes
00:30:50.102 [Pipeline] sh
00:30:50.386 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:50.386 1301938 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:50.386 1302063 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:50.406 [Pipeline] sh
00:30:50.696 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:50.696 ++ grep -v 'sudo pgrep'
00:30:50.696 ++ awk '{print $1}'
00:30:50.696 + sudo kill -9 1301938
00:30:50.708 [Pipeline] sh
00:30:50.991 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:00.975 [Pipeline] sh
00:31:01.266 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:01.266 Artifacts sizes are good
00:31:01.281 [Pipeline] archiveArtifacts
00:31:01.289 Archiving artifacts
00:31:01.497 [Pipeline] sh
00:31:01.785 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:01.801 [Pipeline] cleanWs
00:31:01.812 [WS-CLEANUP] Deleting project workspace...
00:31:01.812 [WS-CLEANUP] Deferred wipeout is used...
00:31:01.820 [WS-CLEANUP] done
00:31:01.822 [Pipeline] }
00:31:02.145 [Pipeline] // catchError
00:31:02.158 [Pipeline] sh
00:31:02.442 + logger -p user.info -t JENKINS-CI
00:31:02.455 [Pipeline] }
00:31:02.475 [Pipeline] // stage
00:31:02.483 [Pipeline] }
00:31:02.495 [Pipeline] // node
00:31:02.500 [Pipeline] End of Pipeline
00:31:02.230 Finished: SUCCESS